00:00:00.002 Started by upstream project "autotest-per-patch" build number 132369 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.077 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.103 Using shallow fetch with depth 1 00:00:00.103 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.103 > git --version # timeout=10 00:00:00.127 > git --version # 'git version 2.39.2' 00:00:00.127 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.154 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.154 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.495 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.508 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.519 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.519 > git config core.sparsecheckout # timeout=10 00:00:03.530 > git read-tree -mu HEAD # timeout=10 00:00:03.547 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.571 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.571 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.678 [Pipeline] Start of Pipeline 00:00:03.692 [Pipeline] library 00:00:03.693 Loading library shm_lib@master 00:00:03.694 Library shm_lib@master is cached. Copying from home. 00:00:03.708 [Pipeline] node 00:00:03.718 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_3 00:00:03.719 [Pipeline] { 00:00:03.729 [Pipeline] catchError 00:00:03.730 [Pipeline] { 00:00:03.745 [Pipeline] wrap 00:00:03.754 [Pipeline] { 00:00:03.766 [Pipeline] stage 00:00:03.767 [Pipeline] { (Prologue) 00:00:03.788 [Pipeline] echo 00:00:03.790 Node: VM-host-WFP7 00:00:03.797 [Pipeline] cleanWs 00:00:03.806 [WS-CLEANUP] Deleting project workspace... 00:00:03.806 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.813 [WS-CLEANUP] done 00:00:04.056 [Pipeline] setCustomBuildProperty 00:00:04.130 [Pipeline] httpRequest 00:00:04.433 [Pipeline] echo 00:00:04.435 Sorcerer 10.211.164.20 is alive 00:00:04.443 [Pipeline] retry 00:00:04.445 [Pipeline] { 00:00:04.462 [Pipeline] httpRequest 00:00:04.467 HttpMethod: GET 00:00:04.468 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.469 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.469 Response Code: HTTP/1.1 200 OK 00:00:04.470 Success: Status code 200 is in the accepted range: 200,404 00:00:04.470 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.615 [Pipeline] } 00:00:04.630 [Pipeline] // retry 00:00:04.637 [Pipeline] sh 00:00:04.918 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.931 [Pipeline] httpRequest 00:00:05.711 [Pipeline] echo 00:00:05.713 Sorcerer 10.211.164.20 is alive 00:00:05.721 [Pipeline] retry 00:00:05.723 [Pipeline] { 00:00:05.736 [Pipeline] httpRequest 00:00:05.740 HttpMethod: GET 00:00:05.740 URL: 
http://10.211.164.20/packages/spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:00:05.741 Sending request to url: http://10.211.164.20/packages/spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:00:05.742 Response Code: HTTP/1.1 200 OK 00:00:05.743 Success: Status code 200 is in the accepted range: 200,404 00:00:05.743 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:00:29.264 [Pipeline] } 00:00:29.283 [Pipeline] // retry 00:00:29.291 [Pipeline] sh 00:00:29.571 + tar --no-same-owner -xf spdk_2741dd1ac0c0ecd0ce07c22046b63fcee1db3eed.tar.gz 00:00:32.117 [Pipeline] sh 00:00:32.396 + git -C spdk log --oneline -n5 00:00:32.396 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:00:32.396 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:00:32.396 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:00:32.396 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:00:32.396 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:00:32.414 [Pipeline] writeFile 00:00:32.428 [Pipeline] sh 00:00:32.708 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:32.719 [Pipeline] sh 00:00:32.998 + cat autorun-spdk.conf 00:00:32.998 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.998 SPDK_RUN_ASAN=1 00:00:32.998 SPDK_RUN_UBSAN=1 00:00:32.998 SPDK_TEST_RAID=1 00:00:32.998 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:33.005 RUN_NIGHTLY=0 00:00:33.007 [Pipeline] } 00:00:33.023 [Pipeline] // stage 00:00:33.040 [Pipeline] stage 00:00:33.042 [Pipeline] { (Run VM) 00:00:33.055 [Pipeline] sh 00:00:33.334 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:33.334 + echo 'Start stage prepare_nvme.sh' 00:00:33.334 Start stage prepare_nvme.sh 00:00:33.334 + [[ -n 6 ]] 00:00:33.334 + disk_prefix=ex6 00:00:33.334 + [[ -n /var/jenkins/workspace/raid-vg-autotest_3 ]] 00:00:33.334 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]] 00:00:33.334 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf 00:00:33.334 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.334 ++ SPDK_RUN_ASAN=1 00:00:33.334 ++ SPDK_RUN_UBSAN=1 00:00:33.334 ++ SPDK_TEST_RAID=1 00:00:33.334 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:33.334 ++ RUN_NIGHTLY=0 00:00:33.334 + cd /var/jenkins/workspace/raid-vg-autotest_3 00:00:33.334 + nvme_files=() 00:00:33.334 + declare -A nvme_files 00:00:33.334 + backend_dir=/var/lib/libvirt/images/backends 00:00:33.334 + nvme_files['nvme.img']=5G 00:00:33.334 + nvme_files['nvme-cmb.img']=5G 00:00:33.334 + nvme_files['nvme-multi0.img']=4G 00:00:33.334 + nvme_files['nvme-multi1.img']=4G 00:00:33.334 + nvme_files['nvme-multi2.img']=4G 00:00:33.334 + nvme_files['nvme-openstack.img']=8G 00:00:33.334 + nvme_files['nvme-zns.img']=5G 00:00:33.334 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:33.334 + (( SPDK_TEST_FTL == 1 )) 00:00:33.334 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:33.334 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:33.334 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.334 + for nvme in "${!nvme_files[@]}" 00:00:33.334 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:33.593 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.593 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:33.593 + echo 'End stage prepare_nvme.sh' 00:00:33.593 End stage prepare_nvme.sh 00:00:33.604 [Pipeline] sh 00:00:33.885 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.885 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:00:33.885 00:00:33.885 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant 00:00:33.885 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk 00:00:33.885 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3 00:00:33.885 HELP=0 00:00:33.885 DRY_RUN=0 00:00:33.885 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:33.885 NVME_DISKS_TYPE=nvme,nvme, 00:00:33.885 NVME_AUTO_CREATE=0 00:00:33.885 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:33.885 NVME_CMB=,, 00:00:33.885 NVME_PMR=,, 00:00:33.885 NVME_ZNS=,, 00:00:33.885 NVME_MS=,, 00:00:33.885 NVME_FDP=,, 00:00:33.885 SPDK_VAGRANT_DISTRO=fedora39 00:00:33.885 SPDK_VAGRANT_VMCPU=10 00:00:33.885 SPDK_VAGRANT_VMRAM=12288 00:00:33.885 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.885 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.885 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.885 SPDK_OPENSTACK_NETWORK=0 00:00:33.885 VAGRANT_PACKAGE_BOX=0 00:00:33.885 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 
00:00:33.885 FORCE_DISTRO=true 00:00:33.885 VAGRANT_BOX_VERSION= 00:00:33.885 EXTRA_VAGRANTFILES= 00:00:33.885 NIC_MODEL=virtio 00:00:33.885 00:00:33.885 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt' 00:00:33.885 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3 00:00:36.418 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.679 ==> default: Creating image (snapshot of base box volume). 00:00:36.679 ==> default: Creating domain with the following settings... 00:00:36.679 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732093981_7054d3e7d25a989d6364 00:00:36.679 ==> default: -- Domain type: kvm 00:00:36.679 ==> default: -- Cpus: 10 00:00:36.679 ==> default: -- Feature: acpi 00:00:36.679 ==> default: -- Feature: apic 00:00:36.679 ==> default: -- Feature: pae 00:00:36.679 ==> default: -- Memory: 12288M 00:00:36.679 ==> default: -- Memory Backing: hugepages: 00:00:36.679 ==> default: -- Management MAC: 00:00:36.679 ==> default: -- Loader: 00:00:36.679 ==> default: -- Nvram: 00:00:36.679 ==> default: -- Base box: spdk/fedora39 00:00:36.679 ==> default: -- Storage pool: default 00:00:36.679 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732093981_7054d3e7d25a989d6364.img (20G) 00:00:36.679 ==> default: -- Volume Cache: default 00:00:36.679 ==> default: -- Kernel: 00:00:36.679 ==> default: -- Initrd: 00:00:36.679 ==> default: -- Graphics Type: vnc 00:00:36.679 ==> default: -- Graphics Port: -1 00:00:36.679 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.679 ==> default: -- Graphics Password: Not defined 00:00:36.679 ==> default: -- Video Type: cirrus 00:00:36.679 ==> default: -- Video VRAM: 9216 00:00:36.679 ==> default: -- Sound Type: 00:00:36.679 ==> default: -- Keymap: en-us 00:00:36.679 ==> default: -- TPM Path: 00:00:36.679 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.679 ==> default: -- Command line 
args: 00:00:36.679 ==> default: -> value=-device, 00:00:36.679 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:36.679 ==> default: -> value=-drive, 00:00:36.679 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:36.679 ==> default: -> value=-device, 00:00:36.679 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.679 ==> default: -> value=-device, 00:00:36.679 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:36.679 ==> default: -> value=-drive, 00:00:36.679 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:36.679 ==> default: -> value=-device, 00:00:36.679 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.679 ==> default: -> value=-drive, 00:00:36.679 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:36.679 ==> default: -> value=-device, 00:00:36.679 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.679 ==> default: -> value=-drive, 00:00:36.679 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:36.679 ==> default: -> value=-device, 00:00:36.679 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.939 ==> default: Creating shared folders metadata... 00:00:36.939 ==> default: Starting domain. 00:00:37.878 ==> default: Waiting for domain to get an IP address... 00:00:55.975 ==> default: Waiting for SSH to become available... 00:00:55.975 ==> default: Configuring and enabling network interfaces... 
00:01:01.254 default: SSH address: 192.168.121.135:22 00:01:01.254 default: SSH username: vagrant 00:01:01.254 default: SSH auth method: private key 00:01:03.792 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:11.916 ==> default: Mounting SSHFS shared folder... 00:01:14.454 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:14.454 ==> default: Checking Mount.. 00:01:15.833 ==> default: Folder Successfully Mounted! 00:01:15.833 ==> default: Running provisioner: file... 00:01:16.771 default: ~/.gitconfig => .gitconfig 00:01:17.338 00:01:17.338 SUCCESS! 00:01:17.338 00:01:17.338 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:01:17.338 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.339 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 
00:01:17.339 00:01:17.347 [Pipeline] } 00:01:17.362 [Pipeline] // stage 00:01:17.372 [Pipeline] dir 00:01:17.372 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt 00:01:17.375 [Pipeline] { 00:01:17.388 [Pipeline] catchError 00:01:17.390 [Pipeline] { 00:01:17.404 [Pipeline] sh 00:01:17.686 + vagrant ssh-config --host vagrant 00:01:17.686 + sed -ne /^Host/,$p 00:01:17.686 + tee ssh_conf 00:01:20.222 Host vagrant 00:01:20.222 HostName 192.168.121.135 00:01:20.222 User vagrant 00:01:20.222 Port 22 00:01:20.222 UserKnownHostsFile /dev/null 00:01:20.222 StrictHostKeyChecking no 00:01:20.222 PasswordAuthentication no 00:01:20.222 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:20.222 IdentitiesOnly yes 00:01:20.222 LogLevel FATAL 00:01:20.222 ForwardAgent yes 00:01:20.222 ForwardX11 yes 00:01:20.222 00:01:20.236 [Pipeline] withEnv 00:01:20.239 [Pipeline] { 00:01:20.254 [Pipeline] sh 00:01:20.538 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:20.538 source /etc/os-release 00:01:20.538 [[ -e /image.version ]] && img=$(< /image.version) 00:01:20.538 # Minimal, systemd-like check. 00:01:20.538 if [[ -e /.dockerenv ]]; then 00:01:20.538 # Clear garbage from the node's name: 00:01:20.538 # agt-er_autotest_547-896 -> autotest_547-896 00:01:20.538 # $HOSTNAME is the actual container id 00:01:20.538 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:20.538 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:20.538 # We can assume this is a mount from a host where container is running, 00:01:20.538 # so fetch its hostname to easily identify the target swarm worker. 
00:01:20.538 container="$(< /etc/hostname) ($agent)" 00:01:20.538 else 00:01:20.538 # Fallback 00:01:20.538 container=$agent 00:01:20.538 fi 00:01:20.538 fi 00:01:20.538 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:20.538 00:01:20.811 [Pipeline] } 00:01:20.829 [Pipeline] // withEnv 00:01:20.840 [Pipeline] setCustomBuildProperty 00:01:20.858 [Pipeline] stage 00:01:20.861 [Pipeline] { (Tests) 00:01:20.879 [Pipeline] sh 00:01:21.184 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:21.456 [Pipeline] sh 00:01:21.737 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:22.012 [Pipeline] timeout 00:01:22.013 Timeout set to expire in 1 hr 30 min 00:01:22.015 [Pipeline] { 00:01:22.029 [Pipeline] sh 00:01:22.315 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:22.885 HEAD is now at 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:01:22.897 [Pipeline] sh 00:01:23.182 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:23.457 [Pipeline] sh 00:01:23.744 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:24.034 [Pipeline] sh 00:01:24.320 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:24.580 ++ readlink -f spdk_repo 00:01:24.580 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:24.580 + [[ -n /home/vagrant/spdk_repo ]] 00:01:24.580 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:24.580 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:24.580 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:24.580 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:24.580 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:24.580 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:24.580 + cd /home/vagrant/spdk_repo 00:01:24.580 + source /etc/os-release 00:01:24.580 ++ NAME='Fedora Linux' 00:01:24.580 ++ VERSION='39 (Cloud Edition)' 00:01:24.580 ++ ID=fedora 00:01:24.580 ++ VERSION_ID=39 00:01:24.580 ++ VERSION_CODENAME= 00:01:24.580 ++ PLATFORM_ID=platform:f39 00:01:24.580 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:24.580 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:24.580 ++ LOGO=fedora-logo-icon 00:01:24.580 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:24.580 ++ HOME_URL=https://fedoraproject.org/ 00:01:24.580 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:24.580 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:24.580 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:24.580 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:24.580 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:24.580 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:24.580 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:24.580 ++ SUPPORT_END=2024-11-12 00:01:24.580 ++ VARIANT='Cloud Edition' 00:01:24.580 ++ VARIANT_ID=cloud 00:01:24.580 + uname -a 00:01:24.580 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:24.580 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:25.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:25.150 Hugepages 00:01:25.150 node hugesize free / total 00:01:25.150 node0 1048576kB 0 / 0 00:01:25.150 node0 2048kB 0 / 0 00:01:25.150 00:01:25.150 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:25.150 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:25.150 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:25.150 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:25.150 + rm -f /tmp/spdk-ld-path 00:01:25.150 + source autorun-spdk.conf 00:01:25.150 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.150 ++ SPDK_RUN_ASAN=1 00:01:25.150 ++ SPDK_RUN_UBSAN=1 00:01:25.150 ++ SPDK_TEST_RAID=1 00:01:25.150 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.150 ++ RUN_NIGHTLY=0 00:01:25.150 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:25.150 + [[ -n '' ]] 00:01:25.150 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:25.150 + for M in /var/spdk/build-*-manifest.txt 00:01:25.150 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:25.150 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:25.409 + for M in /var/spdk/build-*-manifest.txt 00:01:25.409 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:25.409 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:25.409 + for M in /var/spdk/build-*-manifest.txt 00:01:25.409 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:25.409 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:25.409 ++ uname 00:01:25.409 + [[ Linux == \L\i\n\u\x ]] 00:01:25.409 + sudo dmesg -T 00:01:25.409 + sudo dmesg --clear 00:01:25.409 + dmesg_pid=5425 00:01:25.409 + sudo dmesg -Tw 00:01:25.409 + [[ Fedora Linux == FreeBSD ]] 00:01:25.409 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.409 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.409 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:25.409 + [[ -x /usr/src/fio-static/fio ]] 00:01:25.409 + export FIO_BIN=/usr/src/fio-static/fio 00:01:25.409 + FIO_BIN=/usr/src/fio-static/fio 00:01:25.409 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:25.409 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:25.409 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:25.409 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.409 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.409 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:25.409 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.409 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.409 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:25.409 09:13:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:25.409 09:13:50 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:25.409 09:13:50 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.409 09:13:50 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:25.409 09:13:50 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:25.409 09:13:50 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:25.409 09:13:50 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.409 09:13:50 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:25.409 09:13:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:25.409 09:13:50 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:25.670 09:13:50 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:25.670 09:13:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:25.670 09:13:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:25.670 09:13:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.670 09:13:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.670 09:13:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.670 09:13:50 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.670 09:13:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.670 09:13:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.670 09:13:50 -- paths/export.sh@5 -- $ export PATH 00:01:25.670 09:13:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.670 09:13:50 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:25.670 09:13:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:25.670 09:13:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732094030.XXXXXX 00:01:25.670 09:13:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732094030.8ssbYO 00:01:25.670 09:13:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:25.670 09:13:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:25.670 09:13:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:25.670 09:13:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:25.670 09:13:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.670 09:13:50 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:25.670 09:13:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:25.670 09:13:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.670 09:13:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:25.670 09:13:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:25.670 09:13:50 -- pm/common@17 -- $ local monitor 00:01:25.670 09:13:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.670 09:13:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.670 09:13:50 -- pm/common@25 -- $ sleep 1 00:01:25.670 09:13:50 -- pm/common@21 -- $ date +%s 00:01:25.670 09:13:50 -- pm/common@21 -- $ date +%s 00:01:25.670 
09:13:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732094030 00:01:25.670 09:13:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732094030 00:01:25.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732094030_collect-cpu-load.pm.log 00:01:25.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732094030_collect-vmstat.pm.log 00:01:26.608 09:13:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:26.608 09:13:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.608 09:13:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.608 09:13:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:26.608 09:13:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.608 Wed Nov 20 09:13:51 AM UTC 2024 00:01:26.608 09:13:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.608 v25.01-pre-205-g2741dd1ac 00:01:26.608 09:13:51 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:26.608 09:13:51 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:26.608 09:13:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:26.608 09:13:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:26.608 09:13:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.608 ************************************ 00:01:26.608 START TEST asan 00:01:26.608 ************************************ 00:01:26.608 using asan 00:01:26.608 09:13:51 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:26.608 00:01:26.608 real 0m0.000s 00:01:26.608 user 0m0.000s 00:01:26.608 sys 0m0.000s 00:01:26.608 09:13:51 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:26.608 09:13:51 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:26.608 ************************************ 00:01:26.608 END TEST asan 00:01:26.608 ************************************ 00:01:26.608 09:13:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.608 09:13:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:26.608 09:13:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:26.608 09:13:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:26.608 09:13:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.867 ************************************ 00:01:26.867 START TEST ubsan 00:01:26.867 ************************************ 00:01:26.867 using ubsan 00:01:26.867 09:13:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:26.867 00:01:26.867 real 0m0.000s 00:01:26.867 user 0m0.000s 00:01:26.867 sys 0m0.000s 00:01:26.867 09:13:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:26.867 09:13:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.867 ************************************ 00:01:26.867 END TEST ubsan 00:01:26.867 ************************************ 00:01:26.867 09:13:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.867 09:13:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.867 09:13:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.867 09:13:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:26.867 09:13:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:26.867 09:13:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:26.867 09:13:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:26.867 09:13:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:26.867 09:13:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:26.867 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:26.867 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:27.437 Using 'verbs' RDMA provider 00:01:43.271 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:01.377 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:01.377 Creating mk/config.mk...done. 00:02:01.377 Creating mk/cc.flags.mk...done. 00:02:01.377 Type 'make' to build. 00:02:01.377 09:14:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:01.377 09:14:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:01.377 09:14:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:01.377 09:14:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.377 ************************************ 00:02:01.377 START TEST make 00:02:01.377 ************************************ 00:02:01.377 09:14:25 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:01.377 make[1]: Nothing to be done for 'all'. 
00:02:11.378 The Meson build system 00:02:11.378 Version: 1.5.0 00:02:11.378 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:11.378 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:11.378 Build type: native build 00:02:11.378 Program cat found: YES (/usr/bin/cat) 00:02:11.378 Project name: DPDK 00:02:11.378 Project version: 24.03.0 00:02:11.378 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.378 C linker for the host machine: cc ld.bfd 2.40-14 00:02:11.378 Host machine cpu family: x86_64 00:02:11.378 Host machine cpu: x86_64 00:02:11.378 Message: ## Building in Developer Mode ## 00:02:11.378 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.378 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:11.378 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.378 Program python3 found: YES (/usr/bin/python3) 00:02:11.378 Program cat found: YES (/usr/bin/cat) 00:02:11.378 Compiler for C supports arguments -march=native: YES 00:02:11.378 Checking for size of "void *" : 8 00:02:11.378 Checking for size of "void *" : 8 (cached) 00:02:11.378 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:11.378 Library m found: YES 00:02:11.378 Library numa found: YES 00:02:11.378 Has header "numaif.h" : YES 00:02:11.378 Library fdt found: NO 00:02:11.378 Library execinfo found: NO 00:02:11.378 Has header "execinfo.h" : YES 00:02:11.378 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.378 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.378 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.378 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.378 Run-time dependency openssl found: YES 3.1.1 00:02:11.378 Run-time dependency libpcap found: YES 1.10.4 00:02:11.378 Has header "pcap.h" with dependency 
libpcap: YES 00:02:11.378 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.378 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.378 Compiler for C supports arguments -Wformat: YES 00:02:11.378 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.378 Compiler for C supports arguments -Wformat-security: NO 00:02:11.378 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.378 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.378 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.378 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.378 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.378 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.378 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.378 Compiler for C supports arguments -Wundef: YES 00:02:11.378 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.378 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.378 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.378 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.378 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.378 Program objdump found: YES (/usr/bin/objdump) 00:02:11.378 Compiler for C supports arguments -mavx512f: YES 00:02:11.378 Checking if "AVX512 checking" compiles: YES 00:02:11.378 Fetching value of define "__SSE4_2__" : 1 00:02:11.378 Fetching value of define "__AES__" : 1 00:02:11.378 Fetching value of define "__AVX__" : 1 00:02:11.378 Fetching value of define "__AVX2__" : 1 00:02:11.378 Fetching value of define "__AVX512BW__" : 1 00:02:11.378 Fetching value of define "__AVX512CD__" : 1 00:02:11.378 Fetching value of define "__AVX512DQ__" : 1 00:02:11.378 Fetching value of define "__AVX512F__" : 1 00:02:11.378 Fetching value of define "__AVX512VL__" : 1 00:02:11.378 Fetching value of define 
"__PCLMUL__" : 1 00:02:11.378 Fetching value of define "__RDRND__" : 1 00:02:11.378 Fetching value of define "__RDSEED__" : 1 00:02:11.378 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.378 Fetching value of define "__znver1__" : (undefined) 00:02:11.378 Fetching value of define "__znver2__" : (undefined) 00:02:11.378 Fetching value of define "__znver3__" : (undefined) 00:02:11.378 Fetching value of define "__znver4__" : (undefined) 00:02:11.378 Library asan found: YES 00:02:11.378 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.378 Message: lib/log: Defining dependency "log" 00:02:11.378 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.378 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.378 Library rt found: YES 00:02:11.378 Checking for function "getentropy" : NO 00:02:11.378 Message: lib/eal: Defining dependency "eal" 00:02:11.378 Message: lib/ring: Defining dependency "ring" 00:02:11.378 Message: lib/rcu: Defining dependency "rcu" 00:02:11.378 Message: lib/mempool: Defining dependency "mempool" 00:02:11.378 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.378 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.378 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.378 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.378 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.378 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.378 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:11.378 Compiler for C supports arguments -mpclmul: YES 00:02:11.378 Compiler for C supports arguments -maes: YES 00:02:11.378 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.378 Compiler for C supports arguments -mavx512bw: YES 00:02:11.378 Compiler for C supports arguments -mavx512dq: YES 00:02:11.378 Compiler for C supports arguments -mavx512vl: YES 00:02:11.378 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:11.378 Compiler for C supports arguments -mavx2: YES 00:02:11.378 Compiler for C supports arguments -mavx: YES 00:02:11.378 Message: lib/net: Defining dependency "net" 00:02:11.378 Message: lib/meter: Defining dependency "meter" 00:02:11.378 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.378 Message: lib/pci: Defining dependency "pci" 00:02:11.378 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.378 Message: lib/hash: Defining dependency "hash" 00:02:11.378 Message: lib/timer: Defining dependency "timer" 00:02:11.378 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.378 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.378 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.378 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.378 Message: lib/power: Defining dependency "power" 00:02:11.378 Message: lib/reorder: Defining dependency "reorder" 00:02:11.378 Message: lib/security: Defining dependency "security" 00:02:11.378 Has header "linux/userfaultfd.h" : YES 00:02:11.378 Has header "linux/vduse.h" : YES 00:02:11.378 Message: lib/vhost: Defining dependency "vhost" 00:02:11.378 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.378 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.378 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.378 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:11.378 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:11.378 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:11.378 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:11.378 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:11.378 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:11.378 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:11.378 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:11.378 Configuring doxy-api-html.conf using configuration 00:02:11.378 Configuring doxy-api-man.conf using configuration 00:02:11.378 Program mandb found: YES (/usr/bin/mandb) 00:02:11.378 Program sphinx-build found: NO 00:02:11.378 Configuring rte_build_config.h using configuration 00:02:11.378 Message: 00:02:11.378 ================= 00:02:11.378 Applications Enabled 00:02:11.378 ================= 00:02:11.378 00:02:11.378 apps: 00:02:11.378 00:02:11.378 00:02:11.378 Message: 00:02:11.378 ================= 00:02:11.379 Libraries Enabled 00:02:11.379 ================= 00:02:11.379 00:02:11.379 libs: 00:02:11.379 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:11.379 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:11.379 cryptodev, dmadev, power, reorder, security, vhost, 00:02:11.379 00:02:11.379 Message: 00:02:11.379 =============== 00:02:11.379 Drivers Enabled 00:02:11.379 =============== 00:02:11.379 00:02:11.379 common: 00:02:11.379 00:02:11.379 bus: 00:02:11.379 pci, vdev, 00:02:11.379 mempool: 00:02:11.379 ring, 00:02:11.379 dma: 00:02:11.379 00:02:11.379 net: 00:02:11.379 00:02:11.379 crypto: 00:02:11.379 00:02:11.379 compress: 00:02:11.379 00:02:11.379 vdpa: 00:02:11.379 00:02:11.379 00:02:11.379 Message: 00:02:11.379 ================= 00:02:11.379 Content Skipped 00:02:11.379 ================= 00:02:11.379 00:02:11.379 apps: 00:02:11.379 dumpcap: explicitly disabled via build config 00:02:11.379 graph: explicitly disabled via build config 00:02:11.379 pdump: explicitly disabled via build config 00:02:11.379 proc-info: explicitly disabled via build config 00:02:11.379 test-acl: explicitly disabled via build config 00:02:11.379 test-bbdev: explicitly disabled via build config 00:02:11.379 test-cmdline: explicitly disabled via build config 00:02:11.379 test-compress-perf: explicitly disabled via build config 00:02:11.379 test-crypto-perf: explicitly disabled via build 
config 00:02:11.379 test-dma-perf: explicitly disabled via build config 00:02:11.379 test-eventdev: explicitly disabled via build config 00:02:11.379 test-fib: explicitly disabled via build config 00:02:11.379 test-flow-perf: explicitly disabled via build config 00:02:11.379 test-gpudev: explicitly disabled via build config 00:02:11.379 test-mldev: explicitly disabled via build config 00:02:11.379 test-pipeline: explicitly disabled via build config 00:02:11.379 test-pmd: explicitly disabled via build config 00:02:11.379 test-regex: explicitly disabled via build config 00:02:11.379 test-sad: explicitly disabled via build config 00:02:11.379 test-security-perf: explicitly disabled via build config 00:02:11.379 00:02:11.379 libs: 00:02:11.379 argparse: explicitly disabled via build config 00:02:11.379 metrics: explicitly disabled via build config 00:02:11.379 acl: explicitly disabled via build config 00:02:11.379 bbdev: explicitly disabled via build config 00:02:11.379 bitratestats: explicitly disabled via build config 00:02:11.379 bpf: explicitly disabled via build config 00:02:11.379 cfgfile: explicitly disabled via build config 00:02:11.379 distributor: explicitly disabled via build config 00:02:11.379 efd: explicitly disabled via build config 00:02:11.379 eventdev: explicitly disabled via build config 00:02:11.379 dispatcher: explicitly disabled via build config 00:02:11.379 gpudev: explicitly disabled via build config 00:02:11.379 gro: explicitly disabled via build config 00:02:11.379 gso: explicitly disabled via build config 00:02:11.379 ip_frag: explicitly disabled via build config 00:02:11.379 jobstats: explicitly disabled via build config 00:02:11.379 latencystats: explicitly disabled via build config 00:02:11.379 lpm: explicitly disabled via build config 00:02:11.379 member: explicitly disabled via build config 00:02:11.379 pcapng: explicitly disabled via build config 00:02:11.379 rawdev: explicitly disabled via build config 00:02:11.379 regexdev: explicitly 
disabled via build config 00:02:11.379 mldev: explicitly disabled via build config 00:02:11.379 rib: explicitly disabled via build config 00:02:11.379 sched: explicitly disabled via build config 00:02:11.379 stack: explicitly disabled via build config 00:02:11.379 ipsec: explicitly disabled via build config 00:02:11.379 pdcp: explicitly disabled via build config 00:02:11.379 fib: explicitly disabled via build config 00:02:11.379 port: explicitly disabled via build config 00:02:11.379 pdump: explicitly disabled via build config 00:02:11.379 table: explicitly disabled via build config 00:02:11.379 pipeline: explicitly disabled via build config 00:02:11.379 graph: explicitly disabled via build config 00:02:11.379 node: explicitly disabled via build config 00:02:11.379 00:02:11.379 drivers: 00:02:11.379 common/cpt: not in enabled drivers build config 00:02:11.379 common/dpaax: not in enabled drivers build config 00:02:11.379 common/iavf: not in enabled drivers build config 00:02:11.379 common/idpf: not in enabled drivers build config 00:02:11.379 common/ionic: not in enabled drivers build config 00:02:11.379 common/mvep: not in enabled drivers build config 00:02:11.379 common/octeontx: not in enabled drivers build config 00:02:11.379 bus/auxiliary: not in enabled drivers build config 00:02:11.379 bus/cdx: not in enabled drivers build config 00:02:11.379 bus/dpaa: not in enabled drivers build config 00:02:11.379 bus/fslmc: not in enabled drivers build config 00:02:11.379 bus/ifpga: not in enabled drivers build config 00:02:11.379 bus/platform: not in enabled drivers build config 00:02:11.379 bus/uacce: not in enabled drivers build config 00:02:11.379 bus/vmbus: not in enabled drivers build config 00:02:11.379 common/cnxk: not in enabled drivers build config 00:02:11.379 common/mlx5: not in enabled drivers build config 00:02:11.379 common/nfp: not in enabled drivers build config 00:02:11.379 common/nitrox: not in enabled drivers build config 00:02:11.379 common/qat: not 
in enabled drivers build config 00:02:11.379 common/sfc_efx: not in enabled drivers build config 00:02:11.379 mempool/bucket: not in enabled drivers build config 00:02:11.379 mempool/cnxk: not in enabled drivers build config 00:02:11.379 mempool/dpaa: not in enabled drivers build config 00:02:11.379 mempool/dpaa2: not in enabled drivers build config 00:02:11.379 mempool/octeontx: not in enabled drivers build config 00:02:11.379 mempool/stack: not in enabled drivers build config 00:02:11.379 dma/cnxk: not in enabled drivers build config 00:02:11.379 dma/dpaa: not in enabled drivers build config 00:02:11.379 dma/dpaa2: not in enabled drivers build config 00:02:11.379 dma/hisilicon: not in enabled drivers build config 00:02:11.379 dma/idxd: not in enabled drivers build config 00:02:11.379 dma/ioat: not in enabled drivers build config 00:02:11.379 dma/skeleton: not in enabled drivers build config 00:02:11.379 net/af_packet: not in enabled drivers build config 00:02:11.379 net/af_xdp: not in enabled drivers build config 00:02:11.379 net/ark: not in enabled drivers build config 00:02:11.379 net/atlantic: not in enabled drivers build config 00:02:11.379 net/avp: not in enabled drivers build config 00:02:11.379 net/axgbe: not in enabled drivers build config 00:02:11.379 net/bnx2x: not in enabled drivers build config 00:02:11.379 net/bnxt: not in enabled drivers build config 00:02:11.379 net/bonding: not in enabled drivers build config 00:02:11.379 net/cnxk: not in enabled drivers build config 00:02:11.379 net/cpfl: not in enabled drivers build config 00:02:11.379 net/cxgbe: not in enabled drivers build config 00:02:11.379 net/dpaa: not in enabled drivers build config 00:02:11.379 net/dpaa2: not in enabled drivers build config 00:02:11.379 net/e1000: not in enabled drivers build config 00:02:11.379 net/ena: not in enabled drivers build config 00:02:11.379 net/enetc: not in enabled drivers build config 00:02:11.379 net/enetfec: not in enabled drivers build config 
00:02:11.379 net/enic: not in enabled drivers build config 00:02:11.379 net/failsafe: not in enabled drivers build config 00:02:11.379 net/fm10k: not in enabled drivers build config 00:02:11.379 net/gve: not in enabled drivers build config 00:02:11.379 net/hinic: not in enabled drivers build config 00:02:11.379 net/hns3: not in enabled drivers build config 00:02:11.379 net/i40e: not in enabled drivers build config 00:02:11.379 net/iavf: not in enabled drivers build config 00:02:11.379 net/ice: not in enabled drivers build config 00:02:11.379 net/idpf: not in enabled drivers build config 00:02:11.379 net/igc: not in enabled drivers build config 00:02:11.379 net/ionic: not in enabled drivers build config 00:02:11.379 net/ipn3ke: not in enabled drivers build config 00:02:11.379 net/ixgbe: not in enabled drivers build config 00:02:11.379 net/mana: not in enabled drivers build config 00:02:11.379 net/memif: not in enabled drivers build config 00:02:11.379 net/mlx4: not in enabled drivers build config 00:02:11.379 net/mlx5: not in enabled drivers build config 00:02:11.379 net/mvneta: not in enabled drivers build config 00:02:11.379 net/mvpp2: not in enabled drivers build config 00:02:11.379 net/netvsc: not in enabled drivers build config 00:02:11.379 net/nfb: not in enabled drivers build config 00:02:11.379 net/nfp: not in enabled drivers build config 00:02:11.379 net/ngbe: not in enabled drivers build config 00:02:11.379 net/null: not in enabled drivers build config 00:02:11.379 net/octeontx: not in enabled drivers build config 00:02:11.379 net/octeon_ep: not in enabled drivers build config 00:02:11.379 net/pcap: not in enabled drivers build config 00:02:11.379 net/pfe: not in enabled drivers build config 00:02:11.379 net/qede: not in enabled drivers build config 00:02:11.379 net/ring: not in enabled drivers build config 00:02:11.379 net/sfc: not in enabled drivers build config 00:02:11.379 net/softnic: not in enabled drivers build config 00:02:11.379 net/tap: not in 
enabled drivers build config 00:02:11.379 net/thunderx: not in enabled drivers build config 00:02:11.379 net/txgbe: not in enabled drivers build config 00:02:11.379 net/vdev_netvsc: not in enabled drivers build config 00:02:11.379 net/vhost: not in enabled drivers build config 00:02:11.379 net/virtio: not in enabled drivers build config 00:02:11.379 net/vmxnet3: not in enabled drivers build config 00:02:11.379 raw/*: missing internal dependency, "rawdev" 00:02:11.379 crypto/armv8: not in enabled drivers build config 00:02:11.379 crypto/bcmfs: not in enabled drivers build config 00:02:11.379 crypto/caam_jr: not in enabled drivers build config 00:02:11.379 crypto/ccp: not in enabled drivers build config 00:02:11.379 crypto/cnxk: not in enabled drivers build config 00:02:11.379 crypto/dpaa_sec: not in enabled drivers build config 00:02:11.379 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.379 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.379 crypto/mlx5: not in enabled drivers build config 00:02:11.379 crypto/mvsam: not in enabled drivers build config 00:02:11.380 crypto/nitrox: not in enabled drivers build config 00:02:11.380 crypto/null: not in enabled drivers build config 00:02:11.380 crypto/octeontx: not in enabled drivers build config 00:02:11.380 crypto/openssl: not in enabled drivers build config 00:02:11.380 crypto/scheduler: not in enabled drivers build config 00:02:11.380 crypto/uadk: not in enabled drivers build config 00:02:11.380 crypto/virtio: not in enabled drivers build config 00:02:11.380 compress/isal: not in enabled drivers build config 00:02:11.380 compress/mlx5: not in enabled drivers build config 00:02:11.380 compress/nitrox: not in enabled drivers build config 00:02:11.380 compress/octeontx: not in enabled drivers build config 00:02:11.380 compress/zlib: not in enabled drivers build config 00:02:11.380 regex/*: missing internal dependency, "regexdev" 00:02:11.380 ml/*: missing internal dependency, "mldev" 
00:02:11.380 vdpa/ifc: not in enabled drivers build config 00:02:11.380 vdpa/mlx5: not in enabled drivers build config 00:02:11.380 vdpa/nfp: not in enabled drivers build config 00:02:11.380 vdpa/sfc: not in enabled drivers build config 00:02:11.380 event/*: missing internal dependency, "eventdev" 00:02:11.380 baseband/*: missing internal dependency, "bbdev" 00:02:11.380 gpu/*: missing internal dependency, "gpudev" 00:02:11.380 00:02:11.380 00:02:11.380 Build targets in project: 85 00:02:11.380 00:02:11.380 DPDK 24.03.0 00:02:11.380 00:02:11.380 User defined options 00:02:11.380 buildtype : debug 00:02:11.380 default_library : shared 00:02:11.380 libdir : lib 00:02:11.380 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.380 b_sanitize : address 00:02:11.380 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:11.380 c_link_args : 00:02:11.380 cpu_instruction_set: native 00:02:11.380 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:11.380 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:11.380 enable_docs : false 00:02:11.380 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:11.380 enable_kmods : false 00:02:11.380 max_lcores : 128 00:02:11.380 tests : false 00:02:11.380 00:02:11.380 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.639 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:11.639 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:11.639 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.898 [3/268] Linking static target lib/librte_kvargs.a 00:02:11.898 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:11.898 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.898 [6/268] Linking static target lib/librte_log.a 00:02:12.156 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.156 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.156 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.156 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.156 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.414 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.414 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.414 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.414 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:12.414 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.414 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.414 [18/268] Linking static target lib/librte_telemetry.a 00:02:12.981 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.981 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.981 [21/268] Linking target lib/librte_log.so.24.1 00:02:12.981 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.981 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.981 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.981 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.981 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.981 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.238 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.238 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.238 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.238 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:13.238 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.497 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.497 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:13.497 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:13.497 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.756 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.756 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.756 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.756 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.756 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.756 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.756 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.756 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.756 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.014 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.014 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.014 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.272 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.531 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.531 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.531 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.531 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.531 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.531 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.790 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.790 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.790 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.049 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.049 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.049 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.049 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.049 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.049 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.049 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.308 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.308 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
00:02:15.308 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.568 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.568 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.568 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.827 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.827 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.827 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.827 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.827 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.085 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.085 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.085 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.085 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.085 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.344 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.344 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.604 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.604 [85/268] Linking static target lib/librte_ring.a 00:02:16.604 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.604 [87/268] Linking static target lib/librte_eal.a 00:02:16.864 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.864 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.864 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.864 [91/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.864 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.864 [93/268] Linking static target lib/librte_mempool.a 00:02:16.864 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.864 [95/268] Linking static target lib/librte_rcu.a 00:02:16.864 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.124 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.124 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.382 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.382 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.382 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.382 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.382 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.667 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.667 [105/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.667 [106/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.667 [107/268] Linking static target lib/librte_net.a 00:02:17.667 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.667 [109/268] Linking static target lib/librte_meter.a 00:02:17.926 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.926 [111/268] Linking static target lib/librte_mbuf.a 00:02:17.926 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.187 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.187 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.187 [115/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.187 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.187 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.187 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.446 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.704 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.704 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.962 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.962 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.962 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.222 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.222 [126/268] Linking static target lib/librte_pci.a 00:02:19.222 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.222 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.483 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.483 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.483 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.483 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.483 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.483 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.483 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.483 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.483 [137/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.483 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.483 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.743 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.743 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.743 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.743 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.743 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.002 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.002 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.002 [147/268] Linking static target lib/librte_cmdline.a 00:02:20.262 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.262 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.262 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.262 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.262 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.523 [153/268] Linking static target lib/librte_timer.a 00:02:20.523 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.783 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.783 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.783 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.783 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.042 [159/268] Linking static target 
lib/librte_compressdev.a 00:02:21.042 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.042 [161/268] Linking static target lib/librte_hash.a 00:02:21.042 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.042 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.303 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.303 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.303 [166/268] Linking static target lib/librte_ethdev.a 00:02:21.303 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.303 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.563 [169/268] Linking static target lib/librte_dmadev.a 00:02:21.563 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.563 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.822 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.822 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.822 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.822 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.083 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.083 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.083 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.345 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.345 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.345 [181/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.345 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.345 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.345 [184/268] Linking static target lib/librte_power.a 00:02:22.605 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.605 [186/268] Linking static target lib/librte_cryptodev.a 00:02:22.865 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.865 [188/268] Linking static target lib/librte_reorder.a 00:02:22.865 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.125 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.125 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.125 [192/268] Linking static target lib/librte_security.a 00:02:23.125 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.384 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.644 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.644 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.902 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.902 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.902 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.160 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:24.160 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.420 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.420 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.420 
[204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.420 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.679 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.938 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.938 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.938 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.938 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.225 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.225 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:25.225 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.225 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.225 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.225 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.225 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:25.225 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.225 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.225 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.225 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:25.484 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.484 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.484 [224/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.484 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.484 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.743 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.681 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.589 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.589 [230/268] Linking target lib/librte_eal.so.24.1 00:02:28.589 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:28.589 [232/268] Linking target lib/librte_ring.so.24.1 00:02:28.589 [233/268] Linking target lib/librte_timer.so.24.1 00:02:28.589 [234/268] Linking target lib/librte_meter.so.24.1 00:02:28.589 [235/268] Linking target lib/librte_pci.so.24.1 00:02:28.589 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:28.589 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:28.849 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:28.849 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:28.849 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:28.849 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:28.849 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:28.849 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:28.849 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:28.849 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:29.109 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:29.109 [247/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:02:29.109 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:29.109 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:29.109 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:29.368 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:29.368 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:29.368 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:29.368 [254/268] Linking target lib/librte_net.so.24.1 00:02:29.368 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:29.368 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:29.368 [257/268] Linking target lib/librte_hash.so.24.1 00:02:29.368 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:29.368 [259/268] Linking target lib/librte_security.so.24.1 00:02:29.628 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:31.006 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.006 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:31.265 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:31.265 [264/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.265 [265/268] Linking target lib/librte_power.so.24.1 00:02:31.265 [266/268] Linking static target lib/librte_vhost.a 00:02:33.801 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.801 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:33.801 INFO: autodetecting backend as ninja 00:02:33.801 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:55.757 CC lib/log/log_flags.o 00:02:55.757 CC lib/log/log.o 00:02:55.757 CC 
lib/log/log_deprecated.o 00:02:55.757 CC lib/ut/ut.o 00:02:55.757 CC lib/ut_mock/mock.o 00:02:55.757 LIB libspdk_log.a 00:02:55.757 LIB libspdk_ut_mock.a 00:02:55.757 LIB libspdk_ut.a 00:02:55.757 SO libspdk_log.so.7.1 00:02:55.757 SO libspdk_ut_mock.so.6.0 00:02:55.757 SO libspdk_ut.so.2.0 00:02:55.757 SYMLINK libspdk_ut_mock.so 00:02:55.757 SYMLINK libspdk_ut.so 00:02:55.757 SYMLINK libspdk_log.so 00:02:55.757 CXX lib/trace_parser/trace.o 00:02:55.757 CC lib/util/base64.o 00:02:55.757 CC lib/util/bit_array.o 00:02:55.757 CC lib/util/cpuset.o 00:02:55.757 CC lib/util/crc16.o 00:02:55.757 CC lib/util/crc32.o 00:02:55.757 CC lib/util/crc32c.o 00:02:55.757 CC lib/ioat/ioat.o 00:02:55.757 CC lib/dma/dma.o 00:02:55.757 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.757 CC lib/util/crc32_ieee.o 00:02:55.757 CC lib/util/crc64.o 00:02:55.757 CC lib/util/dif.o 00:02:55.757 CC lib/vfio_user/host/vfio_user.o 00:02:55.757 CC lib/util/fd.o 00:02:55.757 LIB libspdk_dma.a 00:02:55.757 CC lib/util/fd_group.o 00:02:55.757 SO libspdk_dma.so.5.0 00:02:55.757 CC lib/util/file.o 00:02:55.757 CC lib/util/hexlify.o 00:02:55.757 LIB libspdk_ioat.a 00:02:55.757 SYMLINK libspdk_dma.so 00:02:55.757 CC lib/util/iov.o 00:02:55.757 SO libspdk_ioat.so.7.0 00:02:55.757 CC lib/util/math.o 00:02:55.757 CC lib/util/net.o 00:02:55.757 LIB libspdk_vfio_user.a 00:02:55.757 CC lib/util/pipe.o 00:02:55.757 SYMLINK libspdk_ioat.so 00:02:55.757 CC lib/util/strerror_tls.o 00:02:55.757 CC lib/util/string.o 00:02:55.757 SO libspdk_vfio_user.so.5.0 00:02:55.757 SYMLINK libspdk_vfio_user.so 00:02:55.757 CC lib/util/uuid.o 00:02:55.757 CC lib/util/xor.o 00:02:55.757 CC lib/util/zipf.o 00:02:55.757 CC lib/util/md5.o 00:02:55.757 LIB libspdk_util.a 00:02:55.757 SO libspdk_util.so.10.1 00:02:55.757 LIB libspdk_trace_parser.a 00:02:55.757 SO libspdk_trace_parser.so.6.0 00:02:55.757 SYMLINK libspdk_util.so 00:02:55.757 SYMLINK libspdk_trace_parser.so 00:02:55.757 CC lib/conf/conf.o 00:02:55.757 CC lib/vmd/vmd.o 
00:02:55.757 CC lib/env_dpdk/env.o 00:02:55.757 CC lib/vmd/led.o 00:02:55.757 CC lib/env_dpdk/memory.o 00:02:55.757 CC lib/env_dpdk/pci.o 00:02:55.757 CC lib/env_dpdk/init.o 00:02:55.757 CC lib/idxd/idxd.o 00:02:55.757 CC lib/rdma_utils/rdma_utils.o 00:02:55.757 CC lib/json/json_parse.o 00:02:55.757 CC lib/json/json_util.o 00:02:55.757 LIB libspdk_conf.a 00:02:55.757 SO libspdk_conf.so.6.0 00:02:55.757 CC lib/idxd/idxd_user.o 00:02:55.757 LIB libspdk_rdma_utils.a 00:02:55.757 SO libspdk_rdma_utils.so.1.0 00:02:55.757 SYMLINK libspdk_conf.so 00:02:55.757 CC lib/idxd/idxd_kernel.o 00:02:56.017 SYMLINK libspdk_rdma_utils.so 00:02:56.017 CC lib/env_dpdk/threads.o 00:02:56.017 CC lib/env_dpdk/pci_ioat.o 00:02:56.017 CC lib/env_dpdk/pci_virtio.o 00:02:56.017 CC lib/json/json_write.o 00:02:56.017 CC lib/env_dpdk/pci_vmd.o 00:02:56.017 CC lib/env_dpdk/pci_idxd.o 00:02:56.017 CC lib/env_dpdk/pci_event.o 00:02:56.017 CC lib/env_dpdk/sigbus_handler.o 00:02:56.017 CC lib/env_dpdk/pci_dpdk.o 00:02:56.278 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.278 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:56.278 LIB libspdk_idxd.a 00:02:56.278 LIB libspdk_json.a 00:02:56.278 SO libspdk_idxd.so.12.1 00:02:56.278 SO libspdk_json.so.6.0 00:02:56.278 LIB libspdk_vmd.a 00:02:56.278 CC lib/rdma_provider/common.o 00:02:56.278 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.278 SYMLINK libspdk_idxd.so 00:02:56.278 SYMLINK libspdk_json.so 00:02:56.278 SO libspdk_vmd.so.6.0 00:02:56.605 SYMLINK libspdk_vmd.so 00:02:56.605 LIB libspdk_rdma_provider.a 00:02:56.605 SO libspdk_rdma_provider.so.7.0 00:02:56.605 CC lib/jsonrpc/jsonrpc_server.o 00:02:56.605 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:56.605 CC lib/jsonrpc/jsonrpc_client.o 00:02:56.605 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:56.863 SYMLINK libspdk_rdma_provider.so 00:02:57.122 LIB libspdk_jsonrpc.a 00:02:57.122 SO libspdk_jsonrpc.so.6.0 00:02:57.122 SYMLINK libspdk_jsonrpc.so 00:02:57.381 LIB libspdk_env_dpdk.a 00:02:57.381 SO 
libspdk_env_dpdk.so.15.1 00:02:57.640 CC lib/rpc/rpc.o 00:02:57.640 SYMLINK libspdk_env_dpdk.so 00:02:57.899 LIB libspdk_rpc.a 00:02:57.899 SO libspdk_rpc.so.6.0 00:02:57.899 SYMLINK libspdk_rpc.so 00:02:58.159 CC lib/notify/notify.o 00:02:58.159 CC lib/notify/notify_rpc.o 00:02:58.159 CC lib/keyring/keyring.o 00:02:58.159 CC lib/keyring/keyring_rpc.o 00:02:58.159 CC lib/trace/trace.o 00:02:58.159 CC lib/trace/trace_flags.o 00:02:58.159 CC lib/trace/trace_rpc.o 00:02:58.419 LIB libspdk_notify.a 00:02:58.419 SO libspdk_notify.so.6.0 00:02:58.419 SYMLINK libspdk_notify.so 00:02:58.419 LIB libspdk_keyring.a 00:02:58.678 LIB libspdk_trace.a 00:02:58.678 SO libspdk_keyring.so.2.0 00:02:58.678 SO libspdk_trace.so.11.0 00:02:58.678 SYMLINK libspdk_keyring.so 00:02:58.678 SYMLINK libspdk_trace.so 00:02:59.247 CC lib/thread/thread.o 00:02:59.247 CC lib/thread/iobuf.o 00:02:59.247 CC lib/sock/sock.o 00:02:59.247 CC lib/sock/sock_rpc.o 00:02:59.506 LIB libspdk_sock.a 00:02:59.766 SO libspdk_sock.so.10.0 00:02:59.766 SYMLINK libspdk_sock.so 00:03:00.024 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.024 CC lib/nvme/nvme_ctrlr.o 00:03:00.024 CC lib/nvme/nvme_fabric.o 00:03:00.024 CC lib/nvme/nvme_pcie_common.o 00:03:00.024 CC lib/nvme/nvme_ns_cmd.o 00:03:00.024 CC lib/nvme/nvme_ns.o 00:03:00.024 CC lib/nvme/nvme_pcie.o 00:03:00.024 CC lib/nvme/nvme_qpair.o 00:03:00.024 CC lib/nvme/nvme.o 00:03:00.959 CC lib/nvme/nvme_quirks.o 00:03:00.960 CC lib/nvme/nvme_transport.o 00:03:00.960 LIB libspdk_thread.a 00:03:00.960 SO libspdk_thread.so.11.0 00:03:00.960 CC lib/nvme/nvme_discovery.o 00:03:00.960 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:00.960 SYMLINK libspdk_thread.so 00:03:00.960 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.218 CC lib/nvme/nvme_tcp.o 00:03:01.218 CC lib/accel/accel.o 00:03:01.477 CC lib/blob/blobstore.o 00:03:01.477 CC lib/nvme/nvme_opal.o 00:03:01.477 CC lib/init/json_config.o 00:03:01.735 CC lib/init/subsystem.o 00:03:01.735 CC lib/init/subsystem_rpc.o 00:03:01.735 CC 
lib/accel/accel_rpc.o 00:03:01.735 CC lib/init/rpc.o 00:03:01.735 CC lib/blob/request.o 00:03:01.997 LIB libspdk_init.a 00:03:01.997 CC lib/nvme/nvme_io_msg.o 00:03:01.997 SO libspdk_init.so.6.0 00:03:01.997 CC lib/fsdev/fsdev.o 00:03:01.997 SYMLINK libspdk_init.so 00:03:01.997 CC lib/fsdev/fsdev_io.o 00:03:01.997 CC lib/virtio/virtio.o 00:03:02.262 CC lib/fsdev/fsdev_rpc.o 00:03:02.262 CC lib/virtio/virtio_vhost_user.o 00:03:02.262 CC lib/blob/zeroes.o 00:03:02.522 CC lib/blob/blob_bs_dev.o 00:03:02.522 CC lib/nvme/nvme_poll_group.o 00:03:02.522 CC lib/nvme/nvme_zns.o 00:03:02.522 CC lib/nvme/nvme_stubs.o 00:03:02.780 CC lib/accel/accel_sw.o 00:03:02.780 CC lib/virtio/virtio_vfio_user.o 00:03:02.780 LIB libspdk_fsdev.a 00:03:02.780 SO libspdk_fsdev.so.2.0 00:03:02.780 CC lib/event/app.o 00:03:03.038 SYMLINK libspdk_fsdev.so 00:03:03.038 CC lib/event/reactor.o 00:03:03.038 CC lib/virtio/virtio_pci.o 00:03:03.038 LIB libspdk_accel.a 00:03:03.038 CC lib/nvme/nvme_auth.o 00:03:03.038 CC lib/event/log_rpc.o 00:03:03.038 SO libspdk_accel.so.16.0 00:03:03.038 CC lib/event/app_rpc.o 00:03:03.297 CC lib/event/scheduler_static.o 00:03:03.297 SYMLINK libspdk_accel.so 00:03:03.297 CC lib/nvme/nvme_cuse.o 00:03:03.297 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:03.297 LIB libspdk_virtio.a 00:03:03.297 CC lib/nvme/nvme_rdma.o 00:03:03.297 SO libspdk_virtio.so.7.0 00:03:03.297 CC lib/bdev/bdev.o 00:03:03.555 SYMLINK libspdk_virtio.so 00:03:03.555 CC lib/bdev/bdev_rpc.o 00:03:03.555 CC lib/bdev/bdev_zone.o 00:03:03.555 CC lib/bdev/part.o 00:03:03.555 LIB libspdk_event.a 00:03:03.555 SO libspdk_event.so.14.0 00:03:03.555 SYMLINK libspdk_event.so 00:03:03.555 CC lib/bdev/scsi_nvme.o 00:03:04.121 LIB libspdk_fuse_dispatcher.a 00:03:04.121 SO libspdk_fuse_dispatcher.so.1.0 00:03:04.121 SYMLINK libspdk_fuse_dispatcher.so 00:03:05.060 LIB libspdk_nvme.a 00:03:05.060 SO libspdk_nvme.so.15.0 00:03:05.318 SYMLINK libspdk_nvme.so 00:03:05.885 LIB libspdk_blob.a 00:03:05.885 SO 
libspdk_blob.so.11.0 00:03:06.144 SYMLINK libspdk_blob.so 00:03:06.404 CC lib/blobfs/blobfs.o 00:03:06.404 CC lib/blobfs/tree.o 00:03:06.404 CC lib/lvol/lvol.o 00:03:06.664 LIB libspdk_bdev.a 00:03:06.664 SO libspdk_bdev.so.17.0 00:03:06.923 SYMLINK libspdk_bdev.so 00:03:07.182 CC lib/nbd/nbd_rpc.o 00:03:07.182 CC lib/nbd/nbd.o 00:03:07.182 CC lib/ublk/ublk.o 00:03:07.182 CC lib/ublk/ublk_rpc.o 00:03:07.182 CC lib/scsi/dev.o 00:03:07.182 CC lib/scsi/lun.o 00:03:07.182 CC lib/ftl/ftl_core.o 00:03:07.182 CC lib/nvmf/ctrlr.o 00:03:07.441 LIB libspdk_blobfs.a 00:03:07.441 CC lib/nvmf/ctrlr_discovery.o 00:03:07.441 SO libspdk_blobfs.so.10.0 00:03:07.441 CC lib/scsi/port.o 00:03:07.441 CC lib/scsi/scsi.o 00:03:07.441 SYMLINK libspdk_blobfs.so 00:03:07.441 CC lib/scsi/scsi_bdev.o 00:03:07.441 LIB libspdk_lvol.a 00:03:07.441 CC lib/nvmf/ctrlr_bdev.o 00:03:07.441 SO libspdk_lvol.so.10.0 00:03:07.441 CC lib/scsi/scsi_pr.o 00:03:07.699 CC lib/scsi/scsi_rpc.o 00:03:07.699 CC lib/ftl/ftl_init.o 00:03:07.699 SYMLINK libspdk_lvol.so 00:03:07.699 CC lib/nvmf/subsystem.o 00:03:07.699 LIB libspdk_nbd.a 00:03:07.699 SO libspdk_nbd.so.7.0 00:03:07.699 CC lib/ftl/ftl_layout.o 00:03:07.699 SYMLINK libspdk_nbd.so 00:03:07.699 CC lib/ftl/ftl_debug.o 00:03:07.958 CC lib/nvmf/nvmf.o 00:03:07.958 LIB libspdk_ublk.a 00:03:07.958 CC lib/nvmf/nvmf_rpc.o 00:03:07.958 CC lib/nvmf/transport.o 00:03:07.958 SO libspdk_ublk.so.3.0 00:03:07.958 SYMLINK libspdk_ublk.so 00:03:07.958 CC lib/ftl/ftl_io.o 00:03:07.958 CC lib/ftl/ftl_sb.o 00:03:08.216 CC lib/nvmf/tcp.o 00:03:08.216 CC lib/scsi/task.o 00:03:08.216 CC lib/ftl/ftl_l2p.o 00:03:08.475 CC lib/ftl/ftl_l2p_flat.o 00:03:08.475 LIB libspdk_scsi.a 00:03:08.475 SO libspdk_scsi.so.9.0 00:03:08.475 CC lib/nvmf/stubs.o 00:03:08.475 CC lib/nvmf/mdns_server.o 00:03:08.475 SYMLINK libspdk_scsi.so 00:03:08.475 CC lib/nvmf/rdma.o 00:03:08.732 CC lib/ftl/ftl_nv_cache.o 00:03:08.732 CC lib/nvmf/auth.o 00:03:08.990 CC lib/ftl/ftl_band.o 00:03:08.990 CC 
lib/ftl/ftl_band_ops.o 00:03:08.990 CC lib/ftl/ftl_writer.o 00:03:08.990 CC lib/ftl/ftl_rq.o 00:03:09.248 CC lib/ftl/ftl_reloc.o 00:03:09.248 CC lib/ftl/ftl_l2p_cache.o 00:03:09.248 CC lib/ftl/ftl_p2l.o 00:03:09.248 CC lib/ftl/ftl_p2l_log.o 00:03:09.248 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.506 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.763 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.763 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.763 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.763 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.763 CC lib/iscsi/conn.o 00:03:09.763 CC lib/vhost/vhost.o 00:03:09.763 CC lib/vhost/vhost_rpc.o 00:03:10.021 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.021 CC lib/iscsi/init_grp.o 00:03:10.021 CC lib/iscsi/iscsi.o 00:03:10.021 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.021 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.279 CC lib/vhost/vhost_scsi.o 00:03:10.279 CC lib/vhost/vhost_blk.o 00:03:10.279 CC lib/iscsi/param.o 00:03:10.536 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.536 CC lib/vhost/rte_vhost_user.o 00:03:10.536 CC lib/iscsi/portal_grp.o 00:03:10.536 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.805 CC lib/iscsi/tgt_node.o 00:03:10.805 CC lib/iscsi/iscsi_subsystem.o 00:03:10.805 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.805 CC lib/iscsi/iscsi_rpc.o 00:03:10.806 CC lib/iscsi/task.o 00:03:11.076 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.076 CC lib/ftl/utils/ftl_conf.o 00:03:11.076 CC lib/ftl/utils/ftl_md.o 00:03:11.076 CC lib/ftl/utils/ftl_mempool.o 00:03:11.333 CC lib/ftl/utils/ftl_bitmap.o 00:03:11.333 CC lib/ftl/utils/ftl_property.o 00:03:11.333 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:11.333 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:11.333 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:11.333 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:11.333 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:11.591 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:11.591 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:11.591 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.591 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:11.591 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.591 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.591 LIB libspdk_iscsi.a 00:03:11.848 LIB libspdk_nvmf.a 00:03:11.848 LIB libspdk_vhost.a 00:03:11.848 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:11.848 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:11.848 CC lib/ftl/base/ftl_base_dev.o 00:03:11.848 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.848 SO libspdk_iscsi.so.8.0 00:03:11.849 CC lib/ftl/ftl_trace.o 00:03:11.849 SO libspdk_vhost.so.8.0 00:03:11.849 SO libspdk_nvmf.so.20.0 00:03:12.107 SYMLINK libspdk_vhost.so 00:03:12.107 SYMLINK libspdk_iscsi.so 00:03:12.107 SYMLINK libspdk_nvmf.so 00:03:12.107 LIB libspdk_ftl.a 00:03:12.366 SO libspdk_ftl.so.9.0 00:03:12.625 SYMLINK libspdk_ftl.so 00:03:13.193 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.193 CC module/keyring/file/keyring.o 00:03:13.193 CC module/keyring/linux/keyring.o 00:03:13.193 CC module/sock/posix/posix.o 00:03:13.193 CC module/accel/error/accel_error.o 00:03:13.193 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.193 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.193 CC module/accel/ioat/accel_ioat.o 00:03:13.193 CC module/fsdev/aio/fsdev_aio.o 00:03:13.193 CC module/blob/bdev/blob_bdev.o 00:03:13.193 LIB libspdk_env_dpdk_rpc.a 00:03:13.193 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.193 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.193 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.193 CC module/keyring/linux/keyring_rpc.o 00:03:13.193 CC module/keyring/file/keyring_rpc.o 00:03:13.193 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.452 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:13.453 CC module/accel/error/accel_error_rpc.o 00:03:13.453 LIB libspdk_scheduler_dynamic.a 00:03:13.453 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:13.453 SO libspdk_scheduler_dynamic.so.4.0 00:03:13.453 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:13.453 CC module/fsdev/aio/linux_aio_mgr.o 00:03:13.453 LIB libspdk_keyring_linux.a 
00:03:13.453 LIB libspdk_keyring_file.a 00:03:13.453 LIB libspdk_accel_ioat.a 00:03:13.453 SYMLINK libspdk_scheduler_dynamic.so 00:03:13.453 SO libspdk_keyring_linux.so.1.0 00:03:13.453 SO libspdk_keyring_file.so.2.0 00:03:13.453 SO libspdk_accel_ioat.so.6.0 00:03:13.453 LIB libspdk_blob_bdev.a 00:03:13.453 LIB libspdk_accel_error.a 00:03:13.453 SO libspdk_blob_bdev.so.11.0 00:03:13.453 SYMLINK libspdk_keyring_linux.so 00:03:13.453 SYMLINK libspdk_keyring_file.so 00:03:13.453 SYMLINK libspdk_accel_ioat.so 00:03:13.453 SO libspdk_accel_error.so.2.0 00:03:13.453 SYMLINK libspdk_blob_bdev.so 00:03:13.712 SYMLINK libspdk_accel_error.so 00:03:13.712 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.712 CC module/accel/dsa/accel_dsa.o 00:03:13.712 CC module/accel/iaa/accel_iaa.o 00:03:13.712 LIB libspdk_scheduler_gscheduler.a 00:03:13.712 CC module/bdev/gpt/gpt.o 00:03:13.712 CC module/bdev/delay/vbdev_delay.o 00:03:13.712 SO libspdk_scheduler_gscheduler.so.4.0 00:03:13.712 CC module/bdev/error/vbdev_error.o 00:03:13.712 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.712 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.969 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.969 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.969 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.969 LIB libspdk_fsdev_aio.a 00:03:13.969 SO libspdk_fsdev_aio.so.1.0 00:03:13.969 LIB libspdk_sock_posix.a 00:03:13.969 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.969 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.969 SO libspdk_sock_posix.so.6.0 00:03:13.969 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.969 SYMLINK libspdk_fsdev_aio.so 00:03:13.969 LIB libspdk_accel_iaa.a 00:03:13.969 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.969 SO libspdk_accel_iaa.so.3.0 00:03:13.969 SYMLINK libspdk_sock_posix.so 00:03:13.969 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.227 SYMLINK libspdk_accel_iaa.so 00:03:14.227 LIB libspdk_accel_dsa.a 00:03:14.227 LIB libspdk_blobfs_bdev.a 00:03:14.227 SO libspdk_accel_dsa.so.5.0 
00:03:14.227 LIB libspdk_bdev_gpt.a 00:03:14.227 SO libspdk_blobfs_bdev.so.6.0 00:03:14.227 SO libspdk_bdev_gpt.so.6.0 00:03:14.227 LIB libspdk_bdev_error.a 00:03:14.227 SO libspdk_bdev_error.so.6.0 00:03:14.227 SYMLINK libspdk_accel_dsa.so 00:03:14.227 SYMLINK libspdk_blobfs_bdev.so 00:03:14.227 SYMLINK libspdk_bdev_gpt.so 00:03:14.227 LIB libspdk_bdev_delay.a 00:03:14.227 CC module/bdev/malloc/bdev_malloc.o 00:03:14.227 SYMLINK libspdk_bdev_error.so 00:03:14.227 CC module/bdev/null/bdev_null.o 00:03:14.227 SO libspdk_bdev_delay.so.6.0 00:03:14.492 SYMLINK libspdk_bdev_delay.so 00:03:14.492 CC module/bdev/nvme/bdev_nvme.o 00:03:14.492 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.492 CC module/bdev/split/vbdev_split.o 00:03:14.492 CC module/bdev/raid/bdev_raid.o 00:03:14.492 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.492 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.492 LIB libspdk_bdev_lvol.a 00:03:14.492 SO libspdk_bdev_lvol.so.6.0 00:03:14.492 CC module/bdev/aio/bdev_aio.o 00:03:14.492 CC module/bdev/null/bdev_null_rpc.o 00:03:14.757 SYMLINK libspdk_bdev_lvol.so 00:03:14.757 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.757 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.757 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.757 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.757 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.757 LIB libspdk_bdev_null.a 00:03:14.757 SO libspdk_bdev_null.so.6.0 00:03:14.757 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.757 LIB libspdk_bdev_malloc.a 00:03:14.757 LIB libspdk_bdev_split.a 00:03:15.015 SYMLINK libspdk_bdev_null.so 00:03:15.015 CC module/bdev/raid/raid0.o 00:03:15.015 CC module/bdev/raid/raid1.o 00:03:15.015 SO libspdk_bdev_malloc.so.6.0 00:03:15.015 SO libspdk_bdev_split.so.6.0 00:03:15.015 LIB libspdk_bdev_passthru.a 00:03:15.015 SO libspdk_bdev_passthru.so.6.0 00:03:15.015 SYMLINK libspdk_bdev_split.so 00:03:15.015 SYMLINK libspdk_bdev_malloc.so 00:03:15.015 LIB libspdk_bdev_aio.a 
00:03:15.015 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.015 SO libspdk_bdev_aio.so.6.0 00:03:15.015 LIB libspdk_bdev_zone_block.a 00:03:15.015 SYMLINK libspdk_bdev_passthru.so 00:03:15.016 SO libspdk_bdev_zone_block.so.6.0 00:03:15.016 SYMLINK libspdk_bdev_aio.so 00:03:15.016 CC module/bdev/raid/concat.o 00:03:15.274 CC module/bdev/ftl/bdev_ftl.o 00:03:15.274 CC module/bdev/raid/raid5f.o 00:03:15.274 CC module/bdev/nvme/nvme_rpc.o 00:03:15.274 SYMLINK libspdk_bdev_zone_block.so 00:03:15.274 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.274 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.274 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.274 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.274 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.533 CC module/bdev/nvme/vbdev_opal.o 00:03:15.533 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.533 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.533 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.791 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.791 LIB libspdk_bdev_iscsi.a 00:03:15.791 LIB libspdk_bdev_ftl.a 00:03:15.791 SO libspdk_bdev_iscsi.so.6.0 00:03:15.791 LIB libspdk_bdev_raid.a 00:03:15.791 SO libspdk_bdev_ftl.so.6.0 00:03:15.791 SO libspdk_bdev_raid.so.6.0 00:03:15.791 SYMLINK libspdk_bdev_iscsi.so 00:03:15.791 LIB libspdk_bdev_virtio.a 00:03:15.791 SYMLINK libspdk_bdev_ftl.so 00:03:16.050 SO libspdk_bdev_virtio.so.6.0 00:03:16.050 SYMLINK libspdk_bdev_raid.so 00:03:16.050 SYMLINK libspdk_bdev_virtio.so 00:03:17.429 LIB libspdk_bdev_nvme.a 00:03:17.686 SO libspdk_bdev_nvme.so.7.1 00:03:17.686 SYMLINK libspdk_bdev_nvme.so 00:03:18.623 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.623 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.623 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.623 CC module/event/subsystems/keyring/keyring.o 00:03:18.623 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.623 CC module/event/subsystems/sock/sock.o 00:03:18.623 CC module/event/subsystems/vmd/vmd.o 00:03:18.623 
CC module/event/subsystems/scheduler/scheduler.o 00:03:18.623 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.623 LIB libspdk_event_keyring.a 00:03:18.623 LIB libspdk_event_fsdev.a 00:03:18.623 LIB libspdk_event_vhost_blk.a 00:03:18.623 LIB libspdk_event_sock.a 00:03:18.623 LIB libspdk_event_iobuf.a 00:03:18.623 LIB libspdk_event_vmd.a 00:03:18.623 SO libspdk_event_vhost_blk.so.3.0 00:03:18.623 SO libspdk_event_fsdev.so.1.0 00:03:18.623 SO libspdk_event_keyring.so.1.0 00:03:18.623 LIB libspdk_event_scheduler.a 00:03:18.623 SO libspdk_event_sock.so.5.0 00:03:18.623 SO libspdk_event_iobuf.so.3.0 00:03:18.623 SO libspdk_event_vmd.so.6.0 00:03:18.623 SO libspdk_event_scheduler.so.4.0 00:03:18.623 SYMLINK libspdk_event_fsdev.so 00:03:18.623 SYMLINK libspdk_event_keyring.so 00:03:18.623 SYMLINK libspdk_event_vhost_blk.so 00:03:18.623 SYMLINK libspdk_event_sock.so 00:03:18.623 SYMLINK libspdk_event_iobuf.so 00:03:18.623 SYMLINK libspdk_event_vmd.so 00:03:18.623 SYMLINK libspdk_event_scheduler.so 00:03:19.201 CC module/event/subsystems/accel/accel.o 00:03:19.201 LIB libspdk_event_accel.a 00:03:19.201 SO libspdk_event_accel.so.6.0 00:03:19.478 SYMLINK libspdk_event_accel.so 00:03:19.737 CC module/event/subsystems/bdev/bdev.o 00:03:19.997 LIB libspdk_event_bdev.a 00:03:19.997 SO libspdk_event_bdev.so.6.0 00:03:19.997 SYMLINK libspdk_event_bdev.so 00:03:20.256 CC module/event/subsystems/nbd/nbd.o 00:03:20.256 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.256 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.515 CC module/event/subsystems/scsi/scsi.o 00:03:20.515 CC module/event/subsystems/ublk/ublk.o 00:03:20.515 LIB libspdk_event_nbd.a 00:03:20.515 LIB libspdk_event_ublk.a 00:03:20.515 LIB libspdk_event_scsi.a 00:03:20.515 SO libspdk_event_nbd.so.6.0 00:03:20.515 SO libspdk_event_scsi.so.6.0 00:03:20.515 SO libspdk_event_ublk.so.3.0 00:03:20.515 SYMLINK libspdk_event_nbd.so 00:03:20.515 LIB libspdk_event_nvmf.a 00:03:20.515 SYMLINK libspdk_event_scsi.so 
00:03:20.773 SYMLINK libspdk_event_ublk.so 00:03:20.773 SO libspdk_event_nvmf.so.6.0 00:03:20.773 SYMLINK libspdk_event_nvmf.so 00:03:21.032 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.032 CC module/event/subsystems/iscsi/iscsi.o 00:03:21.032 LIB libspdk_event_vhost_scsi.a 00:03:21.292 LIB libspdk_event_iscsi.a 00:03:21.292 SO libspdk_event_vhost_scsi.so.3.0 00:03:21.292 SO libspdk_event_iscsi.so.6.0 00:03:21.292 SYMLINK libspdk_event_vhost_scsi.so 00:03:21.292 SYMLINK libspdk_event_iscsi.so 00:03:21.552 SO libspdk.so.6.0 00:03:21.552 SYMLINK libspdk.so 00:03:21.811 CC app/trace_record/trace_record.o 00:03:21.811 CC app/spdk_lspci/spdk_lspci.o 00:03:21.811 CC app/spdk_nvme_perf/perf.o 00:03:21.811 CXX app/trace/trace.o 00:03:21.811 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.811 CC app/nvmf_tgt/nvmf_main.o 00:03:21.811 CC app/spdk_tgt/spdk_tgt.o 00:03:21.811 CC examples/ioat/perf/perf.o 00:03:22.071 CC examples/util/zipf/zipf.o 00:03:22.071 CC test/thread/poller_perf/poller_perf.o 00:03:22.071 LINK spdk_lspci 00:03:22.071 LINK nvmf_tgt 00:03:22.071 LINK iscsi_tgt 00:03:22.071 LINK zipf 00:03:22.071 LINK spdk_tgt 00:03:22.071 LINK spdk_trace_record 00:03:22.071 LINK ioat_perf 00:03:22.071 LINK poller_perf 00:03:22.331 LINK spdk_trace 00:03:22.331 CC app/spdk_nvme_identify/identify.o 00:03:22.331 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.331 CC app/spdk_top/spdk_top.o 00:03:22.331 CC examples/ioat/verify/verify.o 00:03:22.590 TEST_HEADER include/spdk/accel.h 00:03:22.590 TEST_HEADER include/spdk/accel_module.h 00:03:22.590 TEST_HEADER include/spdk/assert.h 00:03:22.590 TEST_HEADER include/spdk/barrier.h 00:03:22.590 TEST_HEADER include/spdk/base64.h 00:03:22.590 TEST_HEADER include/spdk/bdev.h 00:03:22.590 TEST_HEADER include/spdk/bdev_module.h 00:03:22.590 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.590 TEST_HEADER include/spdk/bit_array.h 00:03:22.590 TEST_HEADER include/spdk/bit_pool.h 00:03:22.590 TEST_HEADER include/spdk/blob_bdev.h 
00:03:22.590 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.590 TEST_HEADER include/spdk/blobfs.h 00:03:22.590 TEST_HEADER include/spdk/blob.h 00:03:22.590 TEST_HEADER include/spdk/conf.h 00:03:22.590 TEST_HEADER include/spdk/config.h 00:03:22.590 TEST_HEADER include/spdk/cpuset.h 00:03:22.590 TEST_HEADER include/spdk/crc16.h 00:03:22.590 TEST_HEADER include/spdk/crc32.h 00:03:22.590 CC app/spdk_dd/spdk_dd.o 00:03:22.590 TEST_HEADER include/spdk/crc64.h 00:03:22.590 TEST_HEADER include/spdk/dif.h 00:03:22.590 TEST_HEADER include/spdk/dma.h 00:03:22.590 TEST_HEADER include/spdk/endian.h 00:03:22.591 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.591 TEST_HEADER include/spdk/env.h 00:03:22.591 TEST_HEADER include/spdk/event.h 00:03:22.591 TEST_HEADER include/spdk/fd_group.h 00:03:22.591 TEST_HEADER include/spdk/fd.h 00:03:22.591 TEST_HEADER include/spdk/file.h 00:03:22.591 TEST_HEADER include/spdk/fsdev.h 00:03:22.591 TEST_HEADER include/spdk/fsdev_module.h 00:03:22.591 TEST_HEADER include/spdk/ftl.h 00:03:22.591 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:22.591 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.591 TEST_HEADER include/spdk/hexlify.h 00:03:22.591 CC test/app/bdev_svc/bdev_svc.o 00:03:22.591 TEST_HEADER include/spdk/histogram_data.h 00:03:22.591 TEST_HEADER include/spdk/idxd.h 00:03:22.591 CC test/dma/test_dma/test_dma.o 00:03:22.591 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.591 TEST_HEADER include/spdk/init.h 00:03:22.591 TEST_HEADER include/spdk/ioat.h 00:03:22.591 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.591 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.591 TEST_HEADER include/spdk/json.h 00:03:22.591 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.591 TEST_HEADER include/spdk/keyring.h 00:03:22.591 TEST_HEADER include/spdk/keyring_module.h 00:03:22.591 TEST_HEADER include/spdk/likely.h 00:03:22.591 TEST_HEADER include/spdk/log.h 00:03:22.591 TEST_HEADER include/spdk/lvol.h 00:03:22.591 TEST_HEADER include/spdk/md5.h 00:03:22.591 TEST_HEADER 
include/spdk/memory.h 00:03:22.591 TEST_HEADER include/spdk/mmio.h 00:03:22.591 TEST_HEADER include/spdk/nbd.h 00:03:22.591 TEST_HEADER include/spdk/net.h 00:03:22.591 TEST_HEADER include/spdk/notify.h 00:03:22.591 TEST_HEADER include/spdk/nvme.h 00:03:22.591 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.591 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.591 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.591 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.591 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.591 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.591 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.591 TEST_HEADER include/spdk/nvmf.h 00:03:22.591 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.591 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.591 TEST_HEADER include/spdk/opal.h 00:03:22.591 TEST_HEADER include/spdk/opal_spec.h 00:03:22.591 TEST_HEADER include/spdk/pci_ids.h 00:03:22.591 TEST_HEADER include/spdk/pipe.h 00:03:22.591 TEST_HEADER include/spdk/queue.h 00:03:22.591 LINK spdk_nvme_discover 00:03:22.591 TEST_HEADER include/spdk/reduce.h 00:03:22.591 TEST_HEADER include/spdk/rpc.h 00:03:22.591 TEST_HEADER include/spdk/scheduler.h 00:03:22.591 TEST_HEADER include/spdk/scsi.h 00:03:22.591 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.591 TEST_HEADER include/spdk/sock.h 00:03:22.591 TEST_HEADER include/spdk/stdinc.h 00:03:22.591 TEST_HEADER include/spdk/string.h 00:03:22.591 TEST_HEADER include/spdk/thread.h 00:03:22.591 TEST_HEADER include/spdk/trace.h 00:03:22.591 TEST_HEADER include/spdk/trace_parser.h 00:03:22.591 TEST_HEADER include/spdk/tree.h 00:03:22.591 TEST_HEADER include/spdk/ublk.h 00:03:22.591 TEST_HEADER include/spdk/util.h 00:03:22.591 TEST_HEADER include/spdk/uuid.h 00:03:22.591 TEST_HEADER include/spdk/version.h 00:03:22.591 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.591 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.591 TEST_HEADER include/spdk/vhost.h 00:03:22.591 TEST_HEADER include/spdk/vmd.h 00:03:22.591 LINK verify 
00:03:22.591 TEST_HEADER include/spdk/xor.h 00:03:22.591 TEST_HEADER include/spdk/zipf.h 00:03:22.591 CXX test/cpp_headers/accel.o 00:03:22.850 LINK bdev_svc 00:03:22.850 CC app/fio/nvme/fio_plugin.o 00:03:22.850 CXX test/cpp_headers/accel_module.o 00:03:22.850 LINK spdk_nvme_perf 00:03:22.850 LINK spdk_dd 00:03:23.110 CXX test/cpp_headers/assert.o 00:03:23.110 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.110 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:23.110 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.110 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.110 LINK test_dma 00:03:23.110 CXX test/cpp_headers/barrier.o 00:03:23.110 CXX test/cpp_headers/base64.o 00:03:23.110 LINK interrupt_tgt 00:03:23.370 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.370 CXX test/cpp_headers/bdev.o 00:03:23.370 LINK spdk_nvme_identify 00:03:23.370 CXX test/cpp_headers/bdev_module.o 00:03:23.370 CXX test/cpp_headers/bdev_zone.o 00:03:23.370 LINK spdk_nvme 00:03:23.629 LINK nvme_fuzz 00:03:23.629 LINK spdk_top 00:03:23.629 CC examples/thread/thread/thread_ex.o 00:03:23.629 CXX test/cpp_headers/bit_array.o 00:03:23.629 CC test/app/histogram_perf/histogram_perf.o 00:03:23.629 CC app/fio/bdev/fio_plugin.o 00:03:23.629 CC test/app/jsoncat/jsoncat.o 00:03:23.629 CC test/app/stub/stub.o 00:03:23.629 CXX test/cpp_headers/bit_pool.o 00:03:23.629 CXX test/cpp_headers/blob_bdev.o 00:03:23.891 LINK histogram_perf 00:03:23.891 LINK vhost_fuzz 00:03:23.891 LINK thread 00:03:23.891 LINK jsoncat 00:03:23.891 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.891 CC examples/sock/hello_world/hello_sock.o 00:03:23.891 LINK stub 00:03:23.891 CXX test/cpp_headers/blobfs.o 00:03:23.891 CXX test/cpp_headers/blob.o 00:03:23.891 CXX test/cpp_headers/conf.o 00:03:24.149 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.149 CC examples/vmd/led/led.o 00:03:24.149 CXX test/cpp_headers/config.o 00:03:24.149 LINK hello_sock 00:03:24.149 CXX test/cpp_headers/cpuset.o 00:03:24.149 LINK spdk_bdev 00:03:24.149 CC 
examples/idxd/perf/perf.o 00:03:24.408 LINK lsvmd 00:03:24.408 LINK led 00:03:24.408 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:24.408 CC examples/accel/perf/accel_perf.o 00:03:24.408 CXX test/cpp_headers/crc16.o 00:03:24.408 CC examples/blob/hello_world/hello_blob.o 00:03:24.408 CC app/vhost/vhost.o 00:03:24.408 CC examples/blob/cli/blobcli.o 00:03:24.408 CXX test/cpp_headers/crc32.o 00:03:24.666 CC examples/nvme/hello_world/hello_world.o 00:03:24.666 LINK idxd_perf 00:03:24.666 LINK hello_blob 00:03:24.666 LINK hello_fsdev 00:03:24.666 CXX test/cpp_headers/crc64.o 00:03:24.666 LINK vhost 00:03:24.666 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.666 CXX test/cpp_headers/dif.o 00:03:24.924 LINK hello_world 00:03:24.924 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.924 CC test/env/vtophys/vtophys.o 00:03:24.924 LINK accel_perf 00:03:24.924 CXX test/cpp_headers/dma.o 00:03:24.924 CC examples/nvme/reconnect/reconnect.o 00:03:24.924 CC test/env/memory/memory_ut.o 00:03:24.924 LINK blobcli 00:03:24.924 LINK vtophys 00:03:24.924 LINK env_dpdk_post_init 00:03:25.182 CXX test/cpp_headers/endian.o 00:03:25.182 CXX test/cpp_headers/env_dpdk.o 00:03:25.182 LINK iscsi_fuzz 00:03:25.182 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.182 CXX test/cpp_headers/env.o 00:03:25.182 CXX test/cpp_headers/event.o 00:03:25.182 CXX test/cpp_headers/fd_group.o 00:03:25.182 LINK reconnect 00:03:25.441 LINK mem_callbacks 00:03:25.441 CC examples/nvme/arbitration/arbitration.o 00:03:25.441 CC test/env/pci/pci_ut.o 00:03:25.441 CXX test/cpp_headers/fd.o 00:03:25.441 CC examples/nvme/hotplug/hotplug.o 00:03:25.441 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.441 CC examples/nvme/abort/abort.o 00:03:25.441 CXX test/cpp_headers/file.o 00:03:25.441 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.700 CXX test/cpp_headers/fsdev.o 00:03:25.700 LINK cmb_copy 00:03:25.700 LINK hotplug 00:03:25.700 LINK pmr_persistence 00:03:25.700 LINK arbitration 
00:03:25.700 LINK nvme_manage 00:03:25.700 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.700 CXX test/cpp_headers/fsdev_module.o 00:03:25.700 CXX test/cpp_headers/ftl.o 00:03:25.700 LINK pci_ut 00:03:25.959 CXX test/cpp_headers/fuse_dispatcher.o 00:03:25.959 LINK abort 00:03:25.959 CXX test/cpp_headers/gpt_spec.o 00:03:25.959 CXX test/cpp_headers/hexlify.o 00:03:25.959 LINK hello_bdev 00:03:25.959 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.959 CXX test/cpp_headers/histogram_data.o 00:03:25.959 CXX test/cpp_headers/idxd.o 00:03:25.959 CXX test/cpp_headers/idxd_spec.o 00:03:26.218 CC test/event/event_perf/event_perf.o 00:03:26.218 CC test/nvme/aer/aer.o 00:03:26.218 CC test/nvme/reset/reset.o 00:03:26.218 CC test/event/reactor/reactor.o 00:03:26.218 LINK memory_ut 00:03:26.218 CXX test/cpp_headers/init.o 00:03:26.218 CC test/event/reactor_perf/reactor_perf.o 00:03:26.218 LINK event_perf 00:03:26.218 LINK reactor 00:03:26.218 CC test/event/app_repeat/app_repeat.o 00:03:26.218 CC test/event/scheduler/scheduler.o 00:03:26.478 CXX test/cpp_headers/ioat.o 00:03:26.478 CXX test/cpp_headers/ioat_spec.o 00:03:26.478 LINK reset 00:03:26.478 LINK reactor_perf 00:03:26.478 LINK aer 00:03:26.478 CC test/rpc_client/rpc_client_test.o 00:03:26.478 CXX test/cpp_headers/iscsi_spec.o 00:03:26.478 LINK app_repeat 00:03:26.478 LINK scheduler 00:03:26.478 CXX test/cpp_headers/json.o 00:03:26.737 LINK rpc_client_test 00:03:26.737 CC test/nvme/e2edp/nvme_dp.o 00:03:26.737 CC test/nvme/sgl/sgl.o 00:03:26.737 CC test/nvme/overhead/overhead.o 00:03:26.737 CC test/nvme/err_injection/err_injection.o 00:03:26.737 CXX test/cpp_headers/jsonrpc.o 00:03:26.737 CC test/nvme/startup/startup.o 00:03:26.737 CXX test/cpp_headers/keyring.o 00:03:26.737 CC test/nvme/reserve/reserve.o 00:03:26.997 LINK bdevperf 00:03:26.997 CXX test/cpp_headers/keyring_module.o 00:03:26.997 LINK err_injection 00:03:26.997 LINK startup 00:03:26.997 LINK sgl 00:03:26.997 LINK nvme_dp 00:03:26.997 CC 
test/accel/dif/dif.o 00:03:26.997 LINK overhead 00:03:26.997 LINK reserve 00:03:27.257 CXX test/cpp_headers/likely.o 00:03:27.257 CXX test/cpp_headers/log.o 00:03:27.257 CC test/blobfs/mkfs/mkfs.o 00:03:27.257 CXX test/cpp_headers/lvol.o 00:03:27.257 CC test/nvme/simple_copy/simple_copy.o 00:03:27.257 CC test/nvme/connect_stress/connect_stress.o 00:03:27.257 CC test/nvme/boot_partition/boot_partition.o 00:03:27.257 CC test/nvme/compliance/nvme_compliance.o 00:03:27.257 CXX test/cpp_headers/md5.o 00:03:27.516 CXX test/cpp_headers/memory.o 00:03:27.516 CC examples/nvmf/nvmf/nvmf.o 00:03:27.516 LINK mkfs 00:03:27.516 LINK connect_stress 00:03:27.516 LINK simple_copy 00:03:27.516 LINK boot_partition 00:03:27.516 CC test/lvol/esnap/esnap.o 00:03:27.516 CXX test/cpp_headers/mmio.o 00:03:27.775 CXX test/cpp_headers/nbd.o 00:03:27.775 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.775 CXX test/cpp_headers/net.o 00:03:27.775 LINK nvme_compliance 00:03:27.775 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.775 CXX test/cpp_headers/notify.o 00:03:27.775 CC test/nvme/fdp/fdp.o 00:03:27.775 CC test/nvme/cuse/cuse.o 00:03:27.775 LINK nvmf 00:03:27.775 LINK dif 00:03:27.775 CXX test/cpp_headers/nvme.o 00:03:27.775 CXX test/cpp_headers/nvme_intel.o 00:03:27.775 LINK fused_ordering 00:03:27.775 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.775 LINK doorbell_aers 00:03:28.034 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:28.034 CXX test/cpp_headers/nvme_spec.o 00:03:28.034 CXX test/cpp_headers/nvme_zns.o 00:03:28.034 CXX test/cpp_headers/nvmf_cmd.o 00:03:28.034 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.034 CXX test/cpp_headers/nvmf.o 00:03:28.034 LINK fdp 00:03:28.293 CXX test/cpp_headers/nvmf_spec.o 00:03:28.293 CXX test/cpp_headers/nvmf_transport.o 00:03:28.293 CXX test/cpp_headers/opal.o 00:03:28.293 CC test/bdev/bdevio/bdevio.o 00:03:28.293 CXX test/cpp_headers/opal_spec.o 00:03:28.293 CXX test/cpp_headers/pci_ids.o 00:03:28.293 CXX test/cpp_headers/pipe.o 00:03:28.293 
CXX test/cpp_headers/queue.o 00:03:28.293 CXX test/cpp_headers/reduce.o 00:03:28.293 CXX test/cpp_headers/rpc.o 00:03:28.293 CXX test/cpp_headers/scheduler.o 00:03:28.293 CXX test/cpp_headers/scsi.o 00:03:28.293 CXX test/cpp_headers/scsi_spec.o 00:03:28.293 CXX test/cpp_headers/sock.o 00:03:28.293 CXX test/cpp_headers/stdinc.o 00:03:28.559 CXX test/cpp_headers/string.o 00:03:28.559 CXX test/cpp_headers/thread.o 00:03:28.559 CXX test/cpp_headers/trace.o 00:03:28.559 CXX test/cpp_headers/trace_parser.o 00:03:28.559 CXX test/cpp_headers/tree.o 00:03:28.559 CXX test/cpp_headers/ublk.o 00:03:28.559 CXX test/cpp_headers/util.o 00:03:28.559 CXX test/cpp_headers/uuid.o 00:03:28.559 CXX test/cpp_headers/version.o 00:03:28.559 CXX test/cpp_headers/vfio_user_pci.o 00:03:28.559 LINK bdevio 00:03:28.559 CXX test/cpp_headers/vfio_user_spec.o 00:03:28.827 CXX test/cpp_headers/vhost.o 00:03:28.827 CXX test/cpp_headers/vmd.o 00:03:28.827 CXX test/cpp_headers/xor.o 00:03:28.827 CXX test/cpp_headers/zipf.o 00:03:29.085 LINK cuse 00:03:33.269 LINK esnap 00:03:33.836 00:03:33.836 real 1m33.887s 00:03:33.836 user 8m5.678s 00:03:33.836 sys 1m46.685s 00:03:33.836 09:15:59 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.836 09:15:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.836 ************************************ 00:03:33.836 END TEST make 00:03:33.836 ************************************ 00:03:33.836 09:15:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.836 09:15:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.836 09:15:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.836 09:15:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.836 09:15:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.836 09:15:59 -- pm/common@44 -- $ pid=5467 00:03:33.836 09:15:59 -- pm/common@50 -- $ kill -TERM 5467 00:03:33.836 09:15:59 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.836 09:15:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.836 09:15:59 -- pm/common@44 -- $ pid=5469 00:03:33.836 09:15:59 -- pm/common@50 -- $ kill -TERM 5469 00:03:33.836 09:15:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.836 09:15:59 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.836 09:15:59 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.836 09:15:59 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.836 09:15:59 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:34.095 09:15:59 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:34.095 09:15:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.095 09:15:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.095 09:15:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.095 09:15:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.095 09:15:59 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.095 09:15:59 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.095 09:15:59 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.095 09:15:59 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.095 09:15:59 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.095 09:15:59 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.095 09:15:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.095 09:15:59 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.095 09:15:59 -- scripts/common.sh@345 -- # : 1 00:03:34.095 09:15:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.095 09:15:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.095 09:15:59 -- scripts/common.sh@365 -- # decimal 1 00:03:34.095 09:15:59 -- scripts/common.sh@353 -- # local d=1 00:03:34.095 09:15:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.095 09:15:59 -- scripts/common.sh@355 -- # echo 1 00:03:34.095 09:15:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.095 09:15:59 -- scripts/common.sh@366 -- # decimal 2 00:03:34.095 09:15:59 -- scripts/common.sh@353 -- # local d=2 00:03:34.095 09:15:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.095 09:15:59 -- scripts/common.sh@355 -- # echo 2 00:03:34.095 09:15:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.095 09:15:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.095 09:15:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.095 09:15:59 -- scripts/common.sh@368 -- # return 0 00:03:34.095 09:15:59 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.095 09:15:59 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.095 --rc genhtml_branch_coverage=1 00:03:34.095 --rc genhtml_function_coverage=1 00:03:34.095 --rc genhtml_legend=1 00:03:34.095 --rc geninfo_all_blocks=1 00:03:34.095 --rc geninfo_unexecuted_blocks=1 00:03:34.095 00:03:34.095 ' 00:03:34.095 09:15:59 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.095 --rc genhtml_branch_coverage=1 00:03:34.095 --rc genhtml_function_coverage=1 00:03:34.095 --rc genhtml_legend=1 00:03:34.095 --rc geninfo_all_blocks=1 00:03:34.095 --rc geninfo_unexecuted_blocks=1 00:03:34.095 00:03:34.095 ' 00:03:34.095 09:15:59 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.095 --rc genhtml_branch_coverage=1 00:03:34.095 --rc 
genhtml_function_coverage=1 00:03:34.095 --rc genhtml_legend=1 00:03:34.095 --rc geninfo_all_blocks=1 00:03:34.095 --rc geninfo_unexecuted_blocks=1 00:03:34.095 00:03:34.095 ' 00:03:34.095 09:15:59 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:34.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.095 --rc genhtml_branch_coverage=1 00:03:34.095 --rc genhtml_function_coverage=1 00:03:34.095 --rc genhtml_legend=1 00:03:34.095 --rc geninfo_all_blocks=1 00:03:34.095 --rc geninfo_unexecuted_blocks=1 00:03:34.095 00:03:34.095 ' 00:03:34.095 09:15:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:34.095 09:15:59 -- nvmf/common.sh@7 -- # uname -s 00:03:34.095 09:15:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.095 09:15:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.095 09:15:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.095 09:15:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.095 09:15:59 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.095 09:15:59 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:34.095 09:15:59 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.095 09:15:59 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:34.095 09:15:59 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:440f9e5a-a2c8-4aa6-8016-1a270cad7677 00:03:34.095 09:15:59 -- nvmf/common.sh@16 -- # NVME_HOSTID=440f9e5a-a2c8-4aa6-8016-1a270cad7677 00:03:34.095 09:15:59 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.095 09:15:59 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:34.095 09:15:59 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:03:34.095 09:15:59 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:34.095 09:15:59 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.095 09:15:59 -- 
scripts/common.sh@15 -- # shopt -s extglob 00:03:34.095 09:15:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.095 09:15:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.095 09:15:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.095 09:15:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.095 09:15:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.095 09:15:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.095 09:15:59 -- paths/export.sh@5 -- # export PATH 00:03:34.095 09:15:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.095 09:15:59 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:03:34.095 09:15:59 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:34.095 09:15:59 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:34.095 09:15:59 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:03:34.095 09:15:59 -- nvmf/common.sh@50 
-- # : 0 00:03:34.095 09:15:59 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:34.095 09:15:59 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:34.095 09:15:59 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:34.095 09:15:59 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.095 09:15:59 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.095 09:15:59 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:34.095 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:34.095 09:15:59 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:34.095 09:15:59 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:34.095 09:15:59 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:34.095 09:15:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.095 09:15:59 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.095 09:15:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.095 09:15:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.095 09:15:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.095 09:15:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.095 09:15:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.095 09:15:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.095 09:15:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.095 09:15:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.095 09:15:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.095 09:15:59 -- spdk/autotest.sh@48 -- # udevadm_pid=54533 00:03:34.096 09:15:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.096 09:15:59 -- pm/common@17 -- # local monitor 00:03:34.096 09:15:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.096 09:15:59 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.096 09:15:59 -- pm/common@25 -- # sleep 1 00:03:34.096 09:15:59 -- pm/common@21 -- # date +%s 00:03:34.096 09:15:59 -- pm/common@21 -- # date +%s 00:03:34.096 09:15:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732094159 00:03:34.096 09:15:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732094159 00:03:34.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732094159_collect-cpu-load.pm.log 00:03:34.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732094159_collect-vmstat.pm.log 00:03:35.031 09:16:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:35.031 09:16:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:35.031 09:16:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.031 09:16:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.031 09:16:00 -- spdk/autotest.sh@59 -- # create_test_list 00:03:35.031 09:16:00 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:35.031 09:16:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.290 09:16:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:35.290 09:16:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:35.290 09:16:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:35.290 09:16:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:35.290 09:16:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:35.290 09:16:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:35.290 09:16:00 -- common/autotest_common.sh@1457 -- # uname 00:03:35.290 09:16:00 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:35.290 09:16:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:35.290 09:16:00 -- common/autotest_common.sh@1477 -- # uname 00:03:35.290 09:16:00 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:35.290 09:16:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:35.290 09:16:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:35.290 lcov: LCOV version 1.15 00:03:35.290 09:16:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:50.169 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:50.169 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.278 09:16:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:08.278 09:16:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.278 09:16:31 -- common/autotest_common.sh@10 -- # set +x 00:04:08.278 09:16:31 -- spdk/autotest.sh@78 -- # rm -f 00:04:08.278 09:16:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.278 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:08.278 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:08.278 09:16:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:08.278 09:16:31 -- common/autotest_common.sh@1657 -- # 
zoned_devs=() 00:04:08.278 09:16:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:08.278 09:16:31 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:08.278 09:16:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.278 09:16:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:08.278 09:16:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:08.278 09:16:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.278 09:16:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:08.278 09:16:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:08.278 09:16:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.278 09:16:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:08.278 09:16:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:08.278 09:16:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:08.278 09:16:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:08.278 09:16:31 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:08.278 09:16:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:08.278 09:16:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:08.278 09:16:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:08.278 
09:16:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.278 09:16:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.278 09:16:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:08.278 09:16:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:08.278 09:16:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.278 No valid GPT data, bailing 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # pt= 00:04:08.278 09:16:32 -- scripts/common.sh@395 -- # return 1 00:04:08.278 09:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.278 1+0 records in 00:04:08.278 1+0 records out 00:04:08.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00944911 s, 111 MB/s 00:04:08.278 09:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.278 09:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.278 09:16:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:08.278 09:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:08.278 09:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:08.278 No valid GPT data, bailing 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # pt= 00:04:08.278 09:16:32 -- scripts/common.sh@395 -- # return 1 00:04:08.278 09:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:08.278 1+0 records in 00:04:08.278 1+0 records out 00:04:08.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632543 s, 166 MB/s 00:04:08.278 09:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.278 09:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.278 09:16:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:08.278 
09:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:08.278 09:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:08.278 No valid GPT data, bailing 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # pt= 00:04:08.278 09:16:32 -- scripts/common.sh@395 -- # return 1 00:04:08.278 09:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:08.278 1+0 records in 00:04:08.278 1+0 records out 00:04:08.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059513 s, 176 MB/s 00:04:08.278 09:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.278 09:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:08.278 09:16:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:08.278 09:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:08.278 09:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:08.278 No valid GPT data, bailing 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:08.278 09:16:32 -- scripts/common.sh@394 -- # pt= 00:04:08.278 09:16:32 -- scripts/common.sh@395 -- # return 1 00:04:08.278 09:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:08.278 1+0 records in 00:04:08.278 1+0 records out 00:04:08.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627123 s, 167 MB/s 00:04:08.278 09:16:32 -- spdk/autotest.sh@105 -- # sync 00:04:08.278 09:16:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.278 09:16:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.278 09:16:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.184 09:16:35 -- spdk/autotest.sh@111 -- # uname -s 00:04:10.184 09:16:35 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:04:10.184 09:16:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:10.184 09:16:35 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:10.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.754 Hugepages 00:04:10.754 node hugesize free / total 00:04:10.754 node0 1048576kB 0 / 0 00:04:10.754 node0 2048kB 0 / 0 00:04:10.754 00:04:10.754 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.754 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:11.014 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:11.014 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:11.014 09:16:36 -- spdk/autotest.sh@117 -- # uname -s 00:04:11.014 09:16:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:11.014 09:16:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:11.014 09:16:36 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.952 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.952 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.952 09:16:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:13.330 09:16:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:13.330 09:16:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:13.330 09:16:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.330 09:16:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:13.330 09:16:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.330 09:16:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.330 09:16:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.330 09:16:38 -- common/autotest_common.sh@1499 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.330 09:16:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.330 09:16:38 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:13.330 09:16:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:13.330 09:16:38 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.589 Waiting for block devices as requested 00:04:13.849 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.849 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:13.849 09:16:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:13.849 09:16:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:13.849 09:16:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:13.849 09:16:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:13.849 09:16:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:13.849 09:16:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:13.849 09:16:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:13.849 09:16:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:13.849 09:16:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:13.849 09:16:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:13.849 09:16:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:13.849 09:16:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:13.849 09:16:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.108 09:16:39 -- 
common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.108 09:16:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.108 09:16:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.108 09:16:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.108 09:16:39 -- common/autotest_common.sh@1543 -- # continue 00:04:14.108 09:16:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:14.108 09:16:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:14.108 09:16:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:14.108 09:16:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:14.108 09:16:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:14.108 09:16:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.108 09:16:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.108 09:16:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.108 09:16:39 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.108 09:16:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.108 09:16:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.108 09:16:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.108 09:16:39 -- common/autotest_common.sh@1543 -- # continue 00:04:14.108 09:16:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:14.108 09:16:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.108 09:16:39 -- common/autotest_common.sh@10 -- # set +x 00:04:14.108 09:16:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:14.108 09:16:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.108 09:16:39 -- common/autotest_common.sh@10 -- # set +x 00:04:14.108 09:16:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.047 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.047 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.047 09:16:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:15.047 09:16:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.047 09:16:40 -- common/autotest_common.sh@10 -- # set +x 00:04:15.047 09:16:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:15.047 09:16:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:15.047 09:16:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:15.047 09:16:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:15.047 09:16:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:15.047 09:16:40 -- 
common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:15.047 09:16:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:15.047 09:16:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:15.047 09:16:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:15.047 09:16:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:15.047 09:16:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.047 09:16:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.047 09:16:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.307 09:16:40 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:15.307 09:16:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:15.307 09:16:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.307 09:16:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:15.307 09:16:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.307 09:16:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.307 09:16:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.307 09:16:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:15.307 09:16:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.308 09:16:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.308 09:16:40 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:15.308 09:16:40 -- common/autotest_common.sh@1572 -- # return 0 00:04:15.308 09:16:40 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:15.308 09:16:40 -- common/autotest_common.sh@1580 -- # return 0 00:04:15.308 09:16:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.308 09:16:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:04:15.308 09:16:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.308 09:16:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.308 09:16:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.308 09:16:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.308 09:16:40 -- common/autotest_common.sh@10 -- # set +x 00:04:15.308 09:16:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.308 09:16:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.308 09:16:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.308 09:16:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.308 09:16:40 -- common/autotest_common.sh@10 -- # set +x 00:04:15.308 ************************************ 00:04:15.308 START TEST env 00:04:15.308 ************************************ 00:04:15.308 09:16:40 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.308 * Looking for test storage... 
00:04:15.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:15.308 09:16:40 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.308 09:16:40 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.308 09:16:40 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.568 09:16:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.568 09:16:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.568 09:16:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.568 09:16:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.568 09:16:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.568 09:16:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.568 09:16:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.568 09:16:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.568 09:16:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.568 09:16:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.568 09:16:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.568 09:16:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.568 09:16:40 env -- scripts/common.sh@345 -- # : 1 00:04:15.568 09:16:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.568 09:16:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.568 09:16:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.568 09:16:40 env -- scripts/common.sh@353 -- # local d=1 00:04:15.568 09:16:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.568 09:16:40 env -- scripts/common.sh@355 -- # echo 1 00:04:15.568 09:16:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.568 09:16:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.568 09:16:40 env -- scripts/common.sh@353 -- # local d=2 00:04:15.568 09:16:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.568 09:16:40 env -- scripts/common.sh@355 -- # echo 2 00:04:15.568 09:16:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.568 09:16:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.568 09:16:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.568 09:16:40 env -- scripts/common.sh@368 -- # return 0 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.568 --rc genhtml_branch_coverage=1 00:04:15.568 --rc genhtml_function_coverage=1 00:04:15.568 --rc genhtml_legend=1 00:04:15.568 --rc geninfo_all_blocks=1 00:04:15.568 --rc geninfo_unexecuted_blocks=1 00:04:15.568 00:04:15.568 ' 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.568 --rc genhtml_branch_coverage=1 00:04:15.568 --rc genhtml_function_coverage=1 00:04:15.568 --rc genhtml_legend=1 00:04:15.568 --rc geninfo_all_blocks=1 00:04:15.568 --rc geninfo_unexecuted_blocks=1 00:04:15.568 00:04:15.568 ' 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.568 --rc genhtml_branch_coverage=1 00:04:15.568 --rc genhtml_function_coverage=1 00:04:15.568 --rc genhtml_legend=1 00:04:15.568 --rc geninfo_all_blocks=1 00:04:15.568 --rc geninfo_unexecuted_blocks=1 00:04:15.568 00:04:15.568 ' 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.568 --rc genhtml_branch_coverage=1 00:04:15.568 --rc genhtml_function_coverage=1 00:04:15.568 --rc genhtml_legend=1 00:04:15.568 --rc geninfo_all_blocks=1 00:04:15.568 --rc geninfo_unexecuted_blocks=1 00:04:15.568 00:04:15.568 ' 00:04:15.568 09:16:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.568 09:16:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.568 09:16:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.568 ************************************ 00:04:15.568 START TEST env_memory 00:04:15.568 ************************************ 00:04:15.568 09:16:40 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.568 00:04:15.568 00:04:15.568 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.568 http://cunit.sourceforge.net/ 00:04:15.568 00:04:15.568 00:04:15.568 Suite: memory 00:04:15.568 Test: alloc and free memory map ...[2024-11-20 09:16:40.894550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.568 passed 00:04:15.568 Test: mem map translation ...[2024-11-20 09:16:40.938536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.568 [2024-11-20 09:16:40.938584] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.569 [2024-11-20 09:16:40.938663] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.569 [2024-11-20 09:16:40.938681] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.569 passed 00:04:15.569 Test: mem map registration ...[2024-11-20 09:16:41.006171] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.569 [2024-11-20 09:16:41.006212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.829 passed 00:04:15.829 Test: mem map adjacent registrations ...passed 00:04:15.829 00:04:15.829 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.829 suites 1 1 n/a 0 0 00:04:15.829 tests 4 4 4 0 0 00:04:15.829 asserts 152 152 152 0 n/a 00:04:15.829 00:04:15.829 Elapsed time = 0.243 seconds 00:04:15.829 00:04:15.829 real 0m0.284s 00:04:15.829 user 0m0.248s 00:04:15.829 sys 0m0.029s 00:04:15.829 09:16:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.829 09:16:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.829 ************************************ 00:04:15.829 END TEST env_memory 00:04:15.829 ************************************ 00:04:15.829 09:16:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.829 09:16:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.829 09:16:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.829 09:16:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.829 
************************************ 00:04:15.829 START TEST env_vtophys 00:04:15.829 ************************************ 00:04:15.829 09:16:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.829 EAL: lib.eal log level changed from notice to debug 00:04:15.829 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 1 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 2 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 3 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 4 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 5 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 6 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 7 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 8 as core 0 on socket 0 00:04:15.829 EAL: Detected lcore 9 as core 0 on socket 0 00:04:15.829 EAL: Maximum logical cores by configuration: 128 00:04:15.829 EAL: Detected CPU lcores: 10 00:04:15.829 EAL: Detected NUMA nodes: 1 00:04:15.829 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.829 EAL: Detected shared linkage of DPDK 00:04:15.829 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.829 EAL: Selected IOVA mode 'PA' 00:04:15.829 EAL: Probing VFIO support... 00:04:15.829 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:15.829 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:15.829 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.829 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.829 EAL: Setting up physically contiguous memory... 
00:04:15.829 EAL: Setting maximum number of open files to 524288 00:04:15.829 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.829 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.829 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.829 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.829 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.829 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.829 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.829 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.829 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.829 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.829 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.829 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.829 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.829 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.829 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.829 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.829 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.829 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.829 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.829 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.829 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.829 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.829 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.829 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.829 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.829 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.829 EAL: Hugepages will be freed exactly as allocated. 
00:04:15.829 EAL: No shared files mode enabled, IPC is disabled 00:04:15.829 EAL: No shared files mode enabled, IPC is disabled 00:04:16.089 EAL: TSC frequency is ~2290000 KHz 00:04:16.089 EAL: Main lcore 0 is ready (tid=7f232701ba40;cpuset=[0]) 00:04:16.089 EAL: Trying to obtain current memory policy. 00:04:16.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.089 EAL: Restoring previous memory policy: 0 00:04:16.089 EAL: request: mp_malloc_sync 00:04:16.089 EAL: No shared files mode enabled, IPC is disabled 00:04:16.089 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.089 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:16.089 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.089 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.089 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:16.089 00:04:16.089 00:04:16.089 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.089 http://cunit.sourceforge.net/ 00:04:16.089 00:04:16.089 00:04:16.089 Suite: components_suite 00:04:16.350 Test: vtophys_malloc_test ...passed 00:04:16.350 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.350 EAL: Restoring previous memory policy: 4 00:04:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.350 EAL: request: mp_malloc_sync 00:04:16.350 EAL: No shared files mode enabled, IPC is disabled 00:04:16.350 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.350 EAL: request: mp_malloc_sync 00:04:16.350 EAL: No shared files mode enabled, IPC is disabled 00:04:16.350 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.350 EAL: Trying to obtain current memory policy. 
00:04:16.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.350 EAL: Restoring previous memory policy: 4 00:04:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.350 EAL: request: mp_malloc_sync 00:04:16.350 EAL: No shared files mode enabled, IPC is disabled 00:04:16.350 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.350 EAL: request: mp_malloc_sync 00:04:16.350 EAL: No shared files mode enabled, IPC is disabled 00:04:16.350 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.350 EAL: Trying to obtain current memory policy. 00:04:16.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.350 EAL: Restoring previous memory policy: 4 00:04:16.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.350 EAL: request: mp_malloc_sync 00:04:16.350 EAL: No shared files mode enabled, IPC is disabled 00:04:16.350 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.612 EAL: request: mp_malloc_sync 00:04:16.612 EAL: No shared files mode enabled, IPC is disabled 00:04:16.612 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.612 EAL: Trying to obtain current memory policy. 00:04:16.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.612 EAL: Restoring previous memory policy: 4 00:04:16.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.612 EAL: request: mp_malloc_sync 00:04:16.612 EAL: No shared files mode enabled, IPC is disabled 00:04:16.612 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.612 EAL: request: mp_malloc_sync 00:04:16.612 EAL: No shared files mode enabled, IPC is disabled 00:04:16.612 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.612 EAL: Trying to obtain current memory policy. 
00:04:16.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.612 EAL: Restoring previous memory policy: 4 00:04:16.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.612 EAL: request: mp_malloc_sync 00:04:16.612 EAL: No shared files mode enabled, IPC is disabled 00:04:16.612 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.612 EAL: request: mp_malloc_sync 00:04:16.612 EAL: No shared files mode enabled, IPC is disabled 00:04:16.612 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.612 EAL: Trying to obtain current memory policy. 00:04:16.612 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.612 EAL: Restoring previous memory policy: 4 00:04:16.612 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.612 EAL: request: mp_malloc_sync 00:04:16.612 EAL: No shared files mode enabled, IPC is disabled 00:04:16.612 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.877 EAL: request: mp_malloc_sync 00:04:16.877 EAL: No shared files mode enabled, IPC is disabled 00:04:16.877 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.877 EAL: Trying to obtain current memory policy. 00:04:16.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.877 EAL: Restoring previous memory policy: 4 00:04:16.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.877 EAL: request: mp_malloc_sync 00:04:16.877 EAL: No shared files mode enabled, IPC is disabled 00:04:16.877 EAL: Heap on socket 0 was expanded by 130MB 00:04:17.137 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.137 EAL: request: mp_malloc_sync 00:04:17.137 EAL: No shared files mode enabled, IPC is disabled 00:04:17.137 EAL: Heap on socket 0 was shrunk by 130MB 00:04:17.396 EAL: Trying to obtain current memory policy. 
00:04:17.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.396 EAL: Restoring previous memory policy: 4 00:04:17.396 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.396 EAL: request: mp_malloc_sync 00:04:17.396 EAL: No shared files mode enabled, IPC is disabled 00:04:17.396 EAL: Heap on socket 0 was expanded by 258MB 00:04:17.964 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.964 EAL: request: mp_malloc_sync 00:04:17.965 EAL: No shared files mode enabled, IPC is disabled 00:04:17.965 EAL: Heap on socket 0 was shrunk by 258MB 00:04:18.533 EAL: Trying to obtain current memory policy. 00:04:18.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.533 EAL: Restoring previous memory policy: 4 00:04:18.533 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.533 EAL: request: mp_malloc_sync 00:04:18.533 EAL: No shared files mode enabled, IPC is disabled 00:04:18.533 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.470 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.470 EAL: request: mp_malloc_sync 00:04:19.470 EAL: No shared files mode enabled, IPC is disabled 00:04:19.470 EAL: Heap on socket 0 was shrunk by 514MB 00:04:20.408 EAL: Trying to obtain current memory policy. 
00:04:20.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.668 EAL: Restoring previous memory policy: 4 00:04:20.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.668 EAL: request: mp_malloc_sync 00:04:20.668 EAL: No shared files mode enabled, IPC is disabled 00:04:20.668 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.577 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.577 EAL: request: mp_malloc_sync 00:04:22.577 EAL: No shared files mode enabled, IPC is disabled 00:04:22.577 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.483 passed 00:04:24.483 00:04:24.483 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.483 suites 1 1 n/a 0 0 00:04:24.483 tests 2 2 2 0 0 00:04:24.483 asserts 5768 5768 5768 0 n/a 00:04:24.483 00:04:24.483 Elapsed time = 8.113 seconds 00:04:24.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.483 EAL: request: mp_malloc_sync 00:04:24.483 EAL: No shared files mode enabled, IPC is disabled 00:04:24.483 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.483 EAL: No shared files mode enabled, IPC is disabled 00:04:24.483 EAL: No shared files mode enabled, IPC is disabled 00:04:24.483 EAL: No shared files mode enabled, IPC is disabled 00:04:24.483 00:04:24.483 real 0m8.429s 00:04:24.483 user 0m7.481s 00:04:24.483 sys 0m0.788s 00:04:24.483 09:16:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.483 09:16:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.483 ************************************ 00:04:24.483 END TEST env_vtophys 00:04:24.483 ************************************ 00:04:24.483 09:16:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.483 09:16:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.483 09:16:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.483 09:16:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.483 
************************************ 00:04:24.483 START TEST env_pci 00:04:24.483 ************************************ 00:04:24.483 09:16:49 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.483 00:04:24.483 00:04:24.483 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.483 http://cunit.sourceforge.net/ 00:04:24.483 00:04:24.483 00:04:24.484 Suite: pci 00:04:24.484 Test: pci_hook ...[2024-11-20 09:16:49.720732] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56869 has claimed it 00:04:24.484 passed 00:04:24.484 00:04:24.484 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.484 suites 1 1 n/a 0 0 00:04:24.484 tests 1 1 1 0 0 00:04:24.484 asserts 25 25 25 0 n/a 00:04:24.484 00:04:24.484 Elapsed time = 0.005 seconds 00:04:24.484 EAL: Cannot find device (10000:00:01.0) 00:04:24.484 EAL: Failed to attach device on primary process 00:04:24.484 00:04:24.484 real 0m0.101s 00:04:24.484 user 0m0.048s 00:04:24.484 sys 0m0.052s 00:04:24.484 09:16:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.484 09:16:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.484 ************************************ 00:04:24.484 END TEST env_pci 00:04:24.484 ************************************ 00:04:24.484 09:16:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.484 09:16:49 env -- env/env.sh@15 -- # uname 00:04:24.484 09:16:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.484 09:16:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.484 09:16:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.484 09:16:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:24.484 09:16:49 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.484 09:16:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.484 ************************************ 00:04:24.484 START TEST env_dpdk_post_init 00:04:24.484 ************************************ 00:04:24.484 09:16:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.484 EAL: Detected CPU lcores: 10 00:04:24.484 EAL: Detected NUMA nodes: 1 00:04:24.484 EAL: Detected shared linkage of DPDK 00:04:24.484 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.743 EAL: Selected IOVA mode 'PA' 00:04:24.743 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.743 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:24.743 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:24.743 Starting DPDK initialization... 00:04:24.743 Starting SPDK post initialization... 00:04:24.743 SPDK NVMe probe 00:04:24.743 Attaching to 0000:00:10.0 00:04:24.743 Attaching to 0000:00:11.0 00:04:24.743 Attached to 0000:00:10.0 00:04:24.743 Attached to 0000:00:11.0 00:04:24.743 Cleaning up... 
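The env_dpdk_post_init run above probes two emulated NVMe controllers, `spdk_nvme (1b36:0010)` at 0000:00:10.0 and 0000:00:11.0. A small illustrative parser (not part of the test suite; the regex shape is an assumption based on these probe messages only) for pulling the driver, vendor:device pair, PCI address, and socket out of such lines:

```python
# Illustrative parser for the "Probe PCI driver" lines in the log above.
# The line format is inferred from this log's messages, not from an EAL spec.
import re

PROBE_RE = re.compile(
    r"Probe PCI driver: (\S+) \((\w{4}):(\w{4})\) device: (\S+) \(socket (-?\d+)\)"
)

def parse_probe(line):
    m = PROBE_RE.search(line)
    if not m:
        return None
    driver, vendor, device, bdf, socket = m.groups()
    return {"driver": driver, "vendor": vendor, "device": device,
            "bdf": bdf, "socket": int(socket)}

line = "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)"
print(parse_probe(line))
```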
00:04:24.743 00:04:24.743 real 0m0.288s 00:04:24.743 user 0m0.086s 00:04:24.743 sys 0m0.100s 00:04:24.743 09:16:50 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.743 09:16:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.743 ************************************ 00:04:24.743 END TEST env_dpdk_post_init 00:04:24.743 ************************************ 00:04:24.743 09:16:50 env -- env/env.sh@26 -- # uname 00:04:24.743 09:16:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.743 09:16:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.743 09:16:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.743 09:16:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.743 09:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.743 ************************************ 00:04:24.743 START TEST env_mem_callbacks 00:04:24.743 ************************************ 00:04:24.743 09:16:50 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.003 EAL: Detected CPU lcores: 10 00:04:25.003 EAL: Detected NUMA nodes: 1 00:04:25.003 EAL: Detected shared linkage of DPDK 00:04:25.003 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.003 EAL: Selected IOVA mode 'PA' 00:04:25.003 00:04:25.003 00:04:25.003 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.003 http://cunit.sourceforge.net/ 00:04:25.003 00:04:25.003 00:04:25.003 Suite: memory 00:04:25.003 Test: test ... 
00:04:25.003 register 0x200000200000 2097152 00:04:25.003 malloc 3145728 00:04:25.003 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.003 register 0x200000400000 4194304 00:04:25.003 buf 0x2000004fffc0 len 3145728 PASSED 00:04:25.003 malloc 64 00:04:25.003 buf 0x2000004ffec0 len 64 PASSED 00:04:25.003 malloc 4194304 00:04:25.003 register 0x200000800000 6291456 00:04:25.003 buf 0x2000009fffc0 len 4194304 PASSED 00:04:25.003 free 0x2000004fffc0 3145728 00:04:25.003 free 0x2000004ffec0 64 00:04:25.003 unregister 0x200000400000 4194304 PASSED 00:04:25.003 free 0x2000009fffc0 4194304 00:04:25.003 unregister 0x200000800000 6291456 PASSED 00:04:25.003 malloc 8388608 00:04:25.003 register 0x200000400000 10485760 00:04:25.003 buf 0x2000005fffc0 len 8388608 PASSED 00:04:25.003 free 0x2000005fffc0 8388608 00:04:25.003 unregister 0x200000400000 10485760 PASSED 00:04:25.003 passed 00:04:25.003 00:04:25.003 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.003 suites 1 1 n/a 0 0 00:04:25.003 tests 1 1 1 0 0 00:04:25.003 asserts 15 15 15 0 n/a 00:04:25.003 00:04:25.003 Elapsed time = 0.068 seconds 00:04:25.003 00:04:25.003 real 0m0.261s 00:04:25.003 user 0m0.095s 00:04:25.003 sys 0m0.065s 00:04:25.003 09:16:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.003 09:16:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.003 ************************************ 00:04:25.003 END TEST env_mem_callbacks 00:04:25.003 ************************************ 00:04:25.261 00:04:25.261 real 0m9.922s 00:04:25.261 user 0m8.203s 00:04:25.261 sys 0m1.366s 00:04:25.261 09:16:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.261 09:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.261 ************************************ 00:04:25.261 END TEST env 00:04:25.261 ************************************ 00:04:25.261 09:16:50 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.261 09:16:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.261 09:16:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.261 09:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:25.261 ************************************ 00:04:25.261 START TEST rpc 00:04:25.261 ************************************ 00:04:25.261 09:16:50 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.261 * Looking for test storage... 00:04:25.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.261 09:16:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.261 09:16:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.261 09:16:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.520 09:16:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.520 09:16:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.520 09:16:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.520 09:16:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.520 09:16:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.520 09:16:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.520 09:16:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.520 09:16:50 rpc -- scripts/common.sh@345 -- # : 1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.520 09:16:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.520 09:16:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.520 09:16:50 rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.520 09:16:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.520 09:16:50 rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.520 09:16:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.520 09:16:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.520 09:16:50 rpc -- scripts/common.sh@368 -- # return 0 00:04:25.520 09:16:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.520 09:16:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.520 --rc genhtml_branch_coverage=1 00:04:25.520 --rc genhtml_function_coverage=1 00:04:25.520 --rc genhtml_legend=1 00:04:25.520 --rc geninfo_all_blocks=1 00:04:25.520 --rc geninfo_unexecuted_blocks=1 00:04:25.520 00:04:25.520 ' 00:04:25.520 09:16:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.521 --rc genhtml_branch_coverage=1 00:04:25.521 --rc genhtml_function_coverage=1 00:04:25.521 --rc genhtml_legend=1 00:04:25.521 --rc geninfo_all_blocks=1 00:04:25.521 --rc geninfo_unexecuted_blocks=1 00:04:25.521 00:04:25.521 ' 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:25.521 --rc genhtml_branch_coverage=1 00:04:25.521 --rc genhtml_function_coverage=1 00:04:25.521 --rc genhtml_legend=1 00:04:25.521 --rc geninfo_all_blocks=1 00:04:25.521 --rc geninfo_unexecuted_blocks=1 00:04:25.521 00:04:25.521 ' 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.521 --rc genhtml_branch_coverage=1 00:04:25.521 --rc genhtml_function_coverage=1 00:04:25.521 --rc genhtml_legend=1 00:04:25.521 --rc geninfo_all_blocks=1 00:04:25.521 --rc geninfo_unexecuted_blocks=1 00:04:25.521 00:04:25.521 ' 00:04:25.521 09:16:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:25.521 09:16:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56996 00:04:25.521 09:16:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.521 09:16:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56996 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 56996 ']' 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.521 09:16:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.521 [2024-11-20 09:16:50.899335] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:04:25.521 [2024-11-20 09:16:50.899486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56996 ] 00:04:25.780 [2024-11-20 09:16:51.078042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.780 [2024-11-20 09:16:51.191591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.780 [2024-11-20 09:16:51.191648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56996' to capture a snapshot of events at runtime. 00:04:25.780 [2024-11-20 09:16:51.191659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.780 [2024-11-20 09:16:51.191669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.780 [2024-11-20 09:16:51.191677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56996 for offline analysis/debug. 
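The app_setup_trace notices above give two ways to inspect the trace: run `spdk_trace -s spdk_tgt -p 56996` against the live process, or copy `/dev/shm/spdk_tgt_trace.pid56996` for offline analysis. A tiny sketch deriving that shared-memory path from the app name and pid; the `<app>_trace.pid<pid>` naming pattern is inferred from this single log line, not taken from SPDK documentation:

```python
# Derive the trace shared-memory path quoted in the notice above.
# Assumption: the "<app>_trace.pid<pid>" pattern is inferred from this one
# log line and may not hold for other SPDK versions or applications.
def trace_shm_path(app_name, pid):
    return f"/dev/shm/{app_name}_trace.pid{pid}"

print(trace_shm_path("spdk_tgt", 56996))  # → /dev/shm/spdk_tgt_trace.pid56996
```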
00:04:25.780 [2024-11-20 09:16:51.192793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.715 09:16:52 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.715 09:16:52 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:26.715 09:16:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.715 09:16:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.715 09:16:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:26.715 09:16:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:26.715 09:16:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.715 09:16:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.715 09:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.715 ************************************ 00:04:26.715 START TEST rpc_integrity 00:04:26.715 ************************************ 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:26.715 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.715 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.715 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.715 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.715 09:16:52 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.715 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:26.715 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.715 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.974 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.974 { 00:04:26.974 "name": "Malloc0", 00:04:26.974 "aliases": [ 00:04:26.974 "de101521-b5f6-4ab7-8909-db442787da55" 00:04:26.974 ], 00:04:26.974 "product_name": "Malloc disk", 00:04:26.974 "block_size": 512, 00:04:26.974 "num_blocks": 16384, 00:04:26.974 "uuid": "de101521-b5f6-4ab7-8909-db442787da55", 00:04:26.974 "assigned_rate_limits": { 00:04:26.974 "rw_ios_per_sec": 0, 00:04:26.974 "rw_mbytes_per_sec": 0, 00:04:26.974 "r_mbytes_per_sec": 0, 00:04:26.974 "w_mbytes_per_sec": 0 00:04:26.974 }, 00:04:26.974 "claimed": false, 00:04:26.974 "zoned": false, 00:04:26.974 "supported_io_types": { 00:04:26.974 "read": true, 00:04:26.974 "write": true, 00:04:26.974 "unmap": true, 00:04:26.974 "flush": true, 00:04:26.974 "reset": true, 00:04:26.974 "nvme_admin": false, 00:04:26.974 "nvme_io": false, 00:04:26.974 "nvme_io_md": false, 00:04:26.974 "write_zeroes": true, 00:04:26.974 "zcopy": true, 00:04:26.974 "get_zone_info": false, 00:04:26.974 "zone_management": false, 00:04:26.974 "zone_append": false, 00:04:26.974 "compare": false, 00:04:26.974 "compare_and_write": false, 00:04:26.974 "abort": true, 00:04:26.974 "seek_hole": false, 
00:04:26.974 "seek_data": false, 00:04:26.974 "copy": true, 00:04:26.974 "nvme_iov_md": false 00:04:26.974 }, 00:04:26.974 "memory_domains": [ 00:04:26.974 { 00:04:26.974 "dma_device_id": "system", 00:04:26.974 "dma_device_type": 1 00:04:26.974 }, 00:04:26.974 { 00:04:26.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.974 "dma_device_type": 2 00:04:26.974 } 00:04:26.974 ], 00:04:26.974 "driver_specific": {} 00:04:26.974 } 00:04:26.974 ]' 00:04:26.974 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.974 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.974 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.974 [2024-11-20 09:16:52.247725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:26.974 [2024-11-20 09:16:52.247779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.974 [2024-11-20 09:16:52.247805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:26.974 [2024-11-20 09:16:52.247820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.974 [2024-11-20 09:16:52.250024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.974 [2024-11-20 09:16:52.250063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.974 Passthru0 00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.974 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:26.974 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.974 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.974 { 00:04:26.974 "name": "Malloc0", 00:04:26.974 "aliases": [ 00:04:26.974 "de101521-b5f6-4ab7-8909-db442787da55" 00:04:26.974 ], 00:04:26.974 "product_name": "Malloc disk", 00:04:26.974 "block_size": 512, 00:04:26.974 "num_blocks": 16384, 00:04:26.974 "uuid": "de101521-b5f6-4ab7-8909-db442787da55", 00:04:26.974 "assigned_rate_limits": { 00:04:26.974 "rw_ios_per_sec": 0, 00:04:26.974 "rw_mbytes_per_sec": 0, 00:04:26.974 "r_mbytes_per_sec": 0, 00:04:26.974 "w_mbytes_per_sec": 0 00:04:26.974 }, 00:04:26.974 "claimed": true, 00:04:26.974 "claim_type": "exclusive_write", 00:04:26.974 "zoned": false, 00:04:26.974 "supported_io_types": { 00:04:26.974 "read": true, 00:04:26.974 "write": true, 00:04:26.974 "unmap": true, 00:04:26.974 "flush": true, 00:04:26.974 "reset": true, 00:04:26.974 "nvme_admin": false, 00:04:26.974 "nvme_io": false, 00:04:26.974 "nvme_io_md": false, 00:04:26.974 "write_zeroes": true, 00:04:26.974 "zcopy": true, 00:04:26.974 "get_zone_info": false, 00:04:26.974 "zone_management": false, 00:04:26.974 "zone_append": false, 00:04:26.974 "compare": false, 00:04:26.974 "compare_and_write": false, 00:04:26.974 "abort": true, 00:04:26.974 "seek_hole": false, 00:04:26.974 "seek_data": false, 00:04:26.974 "copy": true, 00:04:26.974 "nvme_iov_md": false 00:04:26.974 }, 00:04:26.974 "memory_domains": [ 00:04:26.974 { 00:04:26.974 "dma_device_id": "system", 00:04:26.974 "dma_device_type": 1 00:04:26.974 }, 00:04:26.974 { 00:04:26.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.974 "dma_device_type": 2 00:04:26.974 } 00:04:26.974 ], 00:04:26.974 "driver_specific": {} 00:04:26.974 }, 00:04:26.974 { 00:04:26.974 "name": "Passthru0", 00:04:26.974 "aliases": [ 00:04:26.974 "d0b307f1-eaf7-5c90-b0d6-ba5bce4066fe" 00:04:26.974 ], 00:04:26.974 "product_name": "passthru", 00:04:26.974 
"block_size": 512, 00:04:26.974 "num_blocks": 16384, 00:04:26.974 "uuid": "d0b307f1-eaf7-5c90-b0d6-ba5bce4066fe", 00:04:26.974 "assigned_rate_limits": { 00:04:26.974 "rw_ios_per_sec": 0, 00:04:26.974 "rw_mbytes_per_sec": 0, 00:04:26.974 "r_mbytes_per_sec": 0, 00:04:26.974 "w_mbytes_per_sec": 0 00:04:26.974 }, 00:04:26.974 "claimed": false, 00:04:26.974 "zoned": false, 00:04:26.974 "supported_io_types": { 00:04:26.974 "read": true, 00:04:26.974 "write": true, 00:04:26.974 "unmap": true, 00:04:26.974 "flush": true, 00:04:26.974 "reset": true, 00:04:26.974 "nvme_admin": false, 00:04:26.974 "nvme_io": false, 00:04:26.974 "nvme_io_md": false, 00:04:26.974 "write_zeroes": true, 00:04:26.974 "zcopy": true, 00:04:26.974 "get_zone_info": false, 00:04:26.974 "zone_management": false, 00:04:26.974 "zone_append": false, 00:04:26.974 "compare": false, 00:04:26.974 "compare_and_write": false, 00:04:26.974 "abort": true, 00:04:26.974 "seek_hole": false, 00:04:26.974 "seek_data": false, 00:04:26.974 "copy": true, 00:04:26.974 "nvme_iov_md": false 00:04:26.974 }, 00:04:26.974 "memory_domains": [ 00:04:26.974 { 00:04:26.974 "dma_device_id": "system", 00:04:26.975 "dma_device_type": 1 00:04:26.975 }, 00:04:26.975 { 00:04:26.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.975 "dma_device_type": 2 00:04:26.975 } 00:04:26.975 ], 00:04:26.975 "driver_specific": { 00:04:26.975 "passthru": { 00:04:26.975 "name": "Passthru0", 00:04:26.975 "base_bdev_name": "Malloc0" 00:04:26.975 } 00:04:26.975 } 00:04:26.975 } 00:04:26.975 ]' 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.975 09:16:52 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.975 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.975 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.233 ************************************ 00:04:27.233 END TEST rpc_integrity 00:04:27.233 ************************************ 00:04:27.233 09:16:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.233 00:04:27.233 real 0m0.352s 00:04:27.233 user 0m0.197s 00:04:27.233 sys 0m0.049s 00:04:27.233 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.233 09:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 09:16:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.233 09:16:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.233 09:16:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.233 09:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 ************************************ 00:04:27.233 START TEST rpc_plugins 00:04:27.233 ************************************ 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.233 { 00:04:27.233 "name": "Malloc1", 00:04:27.233 "aliases": [ 00:04:27.233 "1ff040e3-68aa-4aae-a155-213068002420" 00:04:27.233 ], 00:04:27.233 "product_name": "Malloc disk", 00:04:27.233 "block_size": 4096, 00:04:27.233 "num_blocks": 256, 00:04:27.233 "uuid": "1ff040e3-68aa-4aae-a155-213068002420", 00:04:27.233 "assigned_rate_limits": { 00:04:27.233 "rw_ios_per_sec": 0, 00:04:27.233 "rw_mbytes_per_sec": 0, 00:04:27.233 "r_mbytes_per_sec": 0, 00:04:27.233 "w_mbytes_per_sec": 0 00:04:27.233 }, 00:04:27.233 "claimed": false, 00:04:27.233 "zoned": false, 00:04:27.233 "supported_io_types": { 00:04:27.233 "read": true, 00:04:27.233 "write": true, 00:04:27.233 "unmap": true, 00:04:27.233 "flush": true, 00:04:27.233 "reset": true, 00:04:27.233 "nvme_admin": false, 00:04:27.233 "nvme_io": false, 00:04:27.233 "nvme_io_md": false, 00:04:27.233 "write_zeroes": true, 00:04:27.233 "zcopy": true, 00:04:27.233 "get_zone_info": false, 00:04:27.233 "zone_management": false, 00:04:27.233 "zone_append": false, 00:04:27.233 "compare": false, 00:04:27.233 "compare_and_write": false, 00:04:27.233 "abort": true, 00:04:27.233 "seek_hole": false, 00:04:27.233 "seek_data": false, 00:04:27.233 "copy": 
true, 00:04:27.233 "nvme_iov_md": false 00:04:27.233 }, 00:04:27.233 "memory_domains": [ 00:04:27.233 { 00:04:27.233 "dma_device_id": "system", 00:04:27.233 "dma_device_type": 1 00:04:27.233 }, 00:04:27.233 { 00:04:27.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.233 "dma_device_type": 2 00:04:27.233 } 00:04:27.233 ], 00:04:27.233 "driver_specific": {} 00:04:27.233 } 00:04:27.233 ]' 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.233 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.233 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.234 ************************************ 00:04:27.234 END TEST rpc_plugins 00:04:27.234 ************************************ 00:04:27.234 09:16:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.234 00:04:27.234 real 0m0.172s 00:04:27.234 user 0m0.105s 00:04:27.234 sys 0m0.020s 00:04:27.234 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.234 09:16:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.491 09:16:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.491 09:16:52 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.491 09:16:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.491 09:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.491 ************************************ 00:04:27.491 START TEST rpc_trace_cmd_test 00:04:27.491 ************************************ 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.491 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.491 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56996", 00:04:27.491 "tpoint_group_mask": "0x8", 00:04:27.491 "iscsi_conn": { 00:04:27.491 "mask": "0x2", 00:04:27.491 "tpoint_mask": "0x0" 00:04:27.491 }, 00:04:27.491 "scsi": { 00:04:27.491 "mask": "0x4", 00:04:27.491 "tpoint_mask": "0x0" 00:04:27.491 }, 00:04:27.491 "bdev": { 00:04:27.491 "mask": "0x8", 00:04:27.491 "tpoint_mask": "0xffffffffffffffff" 00:04:27.491 }, 00:04:27.491 "nvmf_rdma": { 00:04:27.492 "mask": "0x10", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "nvmf_tcp": { 00:04:27.492 "mask": "0x20", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "ftl": { 00:04:27.492 "mask": "0x40", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "blobfs": { 00:04:27.492 "mask": "0x80", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "dsa": { 00:04:27.492 "mask": "0x200", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "thread": { 00:04:27.492 "mask": "0x400", 00:04:27.492 
"tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "nvme_pcie": { 00:04:27.492 "mask": "0x800", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "iaa": { 00:04:27.492 "mask": "0x1000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "nvme_tcp": { 00:04:27.492 "mask": "0x2000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "bdev_nvme": { 00:04:27.492 "mask": "0x4000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "sock": { 00:04:27.492 "mask": "0x8000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "blob": { 00:04:27.492 "mask": "0x10000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "bdev_raid": { 00:04:27.492 "mask": "0x20000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 }, 00:04:27.492 "scheduler": { 00:04:27.492 "mask": "0x40000", 00:04:27.492 "tpoint_mask": "0x0" 00:04:27.492 } 00:04:27.492 }' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.492 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.750 ************************************ 00:04:27.750 END TEST rpc_trace_cmd_test 00:04:27.750 ************************************ 00:04:27.750 09:16:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:27.750 00:04:27.750 real 0m0.251s 00:04:27.750 user 
0m0.202s 00:04:27.750 sys 0m0.040s 00:04:27.750 09:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.750 09:16:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 09:16:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.750 09:16:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.750 09:16:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.750 09:16:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.750 09:16:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.750 09:16:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 ************************************ 00:04:27.750 START TEST rpc_daemon_integrity 00:04:27.750 ************************************ 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.750 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.750 { 00:04:27.750 "name": "Malloc2", 00:04:27.750 "aliases": [ 00:04:27.750 "305d2ca7-cb94-4854-ae31-5e7ee38da728" 00:04:27.750 ], 00:04:27.750 "product_name": "Malloc disk", 00:04:27.750 "block_size": 512, 00:04:27.750 "num_blocks": 16384, 00:04:27.750 "uuid": "305d2ca7-cb94-4854-ae31-5e7ee38da728", 00:04:27.750 "assigned_rate_limits": { 00:04:27.750 "rw_ios_per_sec": 0, 00:04:27.750 "rw_mbytes_per_sec": 0, 00:04:27.750 "r_mbytes_per_sec": 0, 00:04:27.750 "w_mbytes_per_sec": 0 00:04:27.750 }, 00:04:27.750 "claimed": false, 00:04:27.750 "zoned": false, 00:04:27.750 "supported_io_types": { 00:04:27.750 "read": true, 00:04:27.750 "write": true, 00:04:27.750 "unmap": true, 00:04:27.750 "flush": true, 00:04:27.750 "reset": true, 00:04:27.750 "nvme_admin": false, 00:04:27.750 "nvme_io": false, 00:04:27.750 "nvme_io_md": false, 00:04:27.750 "write_zeroes": true, 00:04:27.750 "zcopy": true, 00:04:27.750 "get_zone_info": false, 00:04:27.750 "zone_management": false, 00:04:27.750 "zone_append": false, 00:04:27.750 "compare": false, 00:04:27.750 "compare_and_write": false, 00:04:27.750 "abort": true, 00:04:27.750 "seek_hole": false, 00:04:27.750 "seek_data": false, 00:04:27.750 "copy": true, 00:04:27.750 "nvme_iov_md": false 00:04:27.750 }, 00:04:27.751 "memory_domains": [ 00:04:27.751 { 00:04:27.751 "dma_device_id": "system", 00:04:27.751 "dma_device_type": 1 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.751 "dma_device_type": 2 00:04:27.751 } 
00:04:27.751 ], 00:04:27.751 "driver_specific": {} 00:04:27.751 } 00:04:27.751 ]' 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.751 [2024-11-20 09:16:53.167524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.751 [2024-11-20 09:16:53.167576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.751 [2024-11-20 09:16:53.167597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:27.751 [2024-11-20 09:16:53.167609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.751 [2024-11-20 09:16:53.169805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.751 [2024-11-20 09:16:53.169842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.751 Passthru0 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.751 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.009 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.009 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.009 { 00:04:28.009 "name": "Malloc2", 00:04:28.009 "aliases": [ 00:04:28.009 "305d2ca7-cb94-4854-ae31-5e7ee38da728" 
00:04:28.009 ], 00:04:28.009 "product_name": "Malloc disk", 00:04:28.009 "block_size": 512, 00:04:28.009 "num_blocks": 16384, 00:04:28.009 "uuid": "305d2ca7-cb94-4854-ae31-5e7ee38da728", 00:04:28.009 "assigned_rate_limits": { 00:04:28.009 "rw_ios_per_sec": 0, 00:04:28.009 "rw_mbytes_per_sec": 0, 00:04:28.009 "r_mbytes_per_sec": 0, 00:04:28.009 "w_mbytes_per_sec": 0 00:04:28.009 }, 00:04:28.009 "claimed": true, 00:04:28.009 "claim_type": "exclusive_write", 00:04:28.009 "zoned": false, 00:04:28.009 "supported_io_types": { 00:04:28.009 "read": true, 00:04:28.009 "write": true, 00:04:28.009 "unmap": true, 00:04:28.009 "flush": true, 00:04:28.009 "reset": true, 00:04:28.009 "nvme_admin": false, 00:04:28.009 "nvme_io": false, 00:04:28.009 "nvme_io_md": false, 00:04:28.009 "write_zeroes": true, 00:04:28.009 "zcopy": true, 00:04:28.009 "get_zone_info": false, 00:04:28.009 "zone_management": false, 00:04:28.009 "zone_append": false, 00:04:28.009 "compare": false, 00:04:28.009 "compare_and_write": false, 00:04:28.009 "abort": true, 00:04:28.009 "seek_hole": false, 00:04:28.009 "seek_data": false, 00:04:28.009 "copy": true, 00:04:28.009 "nvme_iov_md": false 00:04:28.009 }, 00:04:28.009 "memory_domains": [ 00:04:28.009 { 00:04:28.009 "dma_device_id": "system", 00:04:28.009 "dma_device_type": 1 00:04:28.009 }, 00:04:28.009 { 00:04:28.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.009 "dma_device_type": 2 00:04:28.009 } 00:04:28.009 ], 00:04:28.009 "driver_specific": {} 00:04:28.009 }, 00:04:28.009 { 00:04:28.009 "name": "Passthru0", 00:04:28.009 "aliases": [ 00:04:28.009 "45eb474c-4f85-5c18-8660-3eaba767a25f" 00:04:28.009 ], 00:04:28.009 "product_name": "passthru", 00:04:28.009 "block_size": 512, 00:04:28.009 "num_blocks": 16384, 00:04:28.009 "uuid": "45eb474c-4f85-5c18-8660-3eaba767a25f", 00:04:28.009 "assigned_rate_limits": { 00:04:28.009 "rw_ios_per_sec": 0, 00:04:28.009 "rw_mbytes_per_sec": 0, 00:04:28.009 "r_mbytes_per_sec": 0, 00:04:28.009 "w_mbytes_per_sec": 0 
00:04:28.009 }, 00:04:28.009 "claimed": false, 00:04:28.009 "zoned": false, 00:04:28.009 "supported_io_types": { 00:04:28.009 "read": true, 00:04:28.009 "write": true, 00:04:28.009 "unmap": true, 00:04:28.009 "flush": true, 00:04:28.009 "reset": true, 00:04:28.009 "nvme_admin": false, 00:04:28.009 "nvme_io": false, 00:04:28.009 "nvme_io_md": false, 00:04:28.009 "write_zeroes": true, 00:04:28.009 "zcopy": true, 00:04:28.009 "get_zone_info": false, 00:04:28.009 "zone_management": false, 00:04:28.009 "zone_append": false, 00:04:28.009 "compare": false, 00:04:28.009 "compare_and_write": false, 00:04:28.009 "abort": true, 00:04:28.009 "seek_hole": false, 00:04:28.009 "seek_data": false, 00:04:28.009 "copy": true, 00:04:28.009 "nvme_iov_md": false 00:04:28.009 }, 00:04:28.009 "memory_domains": [ 00:04:28.009 { 00:04:28.009 "dma_device_id": "system", 00:04:28.009 "dma_device_type": 1 00:04:28.010 }, 00:04:28.010 { 00:04:28.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.010 "dma_device_type": 2 00:04:28.010 } 00:04:28.010 ], 00:04:28.010 "driver_specific": { 00:04:28.010 "passthru": { 00:04:28.010 "name": "Passthru0", 00:04:28.010 "base_bdev_name": "Malloc2" 00:04:28.010 } 00:04:28.010 } 00:04:28.010 } 00:04:28.010 ]' 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.010 ************************************ 00:04:28.010 END TEST rpc_daemon_integrity 00:04:28.010 ************************************ 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.010 00:04:28.010 real 0m0.324s 00:04:28.010 user 0m0.165s 00:04:28.010 sys 0m0.060s 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.010 09:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.010 09:16:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.010 09:16:53 rpc -- rpc/rpc.sh@84 -- # killprocess 56996 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@954 -- # '[' -z 56996 ']' 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@958 -- # kill -0 56996 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56996 00:04:28.010 killing process with pid 56996 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56996' 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@973 -- # kill 56996 00:04:28.010 09:16:53 rpc -- common/autotest_common.sh@978 -- # wait 56996 00:04:30.539 ************************************ 00:04:30.539 END TEST rpc 00:04:30.539 ************************************ 00:04:30.539 00:04:30.539 real 0m5.307s 00:04:30.539 user 0m5.855s 00:04:30.539 sys 0m0.894s 00:04:30.539 09:16:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.539 09:16:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.539 09:16:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:30.539 09:16:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.539 09:16:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.539 09:16:55 -- common/autotest_common.sh@10 -- # set +x 00:04:30.539 ************************************ 00:04:30.539 START TEST skip_rpc 00:04:30.539 ************************************ 00:04:30.539 09:16:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:30.799 * Looking for test storage... 
00:04:30.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.799 09:16:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.799 --rc genhtml_branch_coverage=1 00:04:30.799 --rc genhtml_function_coverage=1 00:04:30.799 --rc genhtml_legend=1 00:04:30.799 --rc geninfo_all_blocks=1 00:04:30.799 --rc geninfo_unexecuted_blocks=1 00:04:30.799 00:04:30.799 ' 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.799 --rc genhtml_branch_coverage=1 00:04:30.799 --rc genhtml_function_coverage=1 00:04:30.799 --rc genhtml_legend=1 00:04:30.799 --rc geninfo_all_blocks=1 00:04:30.799 --rc geninfo_unexecuted_blocks=1 00:04:30.799 00:04:30.799 ' 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.799 --rc genhtml_branch_coverage=1 00:04:30.799 --rc genhtml_function_coverage=1 00:04:30.799 --rc genhtml_legend=1 00:04:30.799 --rc geninfo_all_blocks=1 00:04:30.799 --rc geninfo_unexecuted_blocks=1 00:04:30.799 00:04:30.799 ' 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.799 --rc genhtml_branch_coverage=1 00:04:30.799 --rc genhtml_function_coverage=1 00:04:30.799 --rc genhtml_legend=1 00:04:30.799 --rc geninfo_all_blocks=1 00:04:30.799 --rc geninfo_unexecuted_blocks=1 00:04:30.799 00:04:30.799 ' 00:04:30.799 09:16:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.799 09:16:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.799 09:16:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.799 09:16:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.799 ************************************ 00:04:30.799 START TEST skip_rpc 00:04:30.799 ************************************ 00:04:30.799 09:16:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:30.799 09:16:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57231 00:04:30.799 09:16:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:30.799 09:16:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.799 09:16:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:30.799 [2024-11-20 09:16:56.245346] Starting SPDK v25.01-pre 
git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:04:30.799 [2024-11-20 09:16:56.245484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57231 ] 00:04:31.058 [2024-11-20 09:16:56.403542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.317 [2024-11-20 09:16:56.519959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57231 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57231 ']' 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57231 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57231 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.587 killing process with pid 57231 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57231' 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57231 00:04:36.587 09:17:01 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57231 00:04:38.498 00:04:38.498 real 0m7.488s 00:04:38.498 user 0m7.029s 00:04:38.498 sys 0m0.380s 00:04:38.498 09:17:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.498 09:17:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.498 ************************************ 00:04:38.498 END TEST skip_rpc 00:04:38.498 ************************************ 00:04:38.498 09:17:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.498 09:17:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.498 09:17:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.498 09:17:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.498 
************************************ 00:04:38.498 START TEST skip_rpc_with_json 00:04:38.498 ************************************ 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57335 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57335 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57335 ']' 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.498 09:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.498 [2024-11-20 09:17:03.788209] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:04:38.498 [2024-11-20 09:17:03.788354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57335 ] 00:04:38.757 [2024-11-20 09:17:03.964009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.757 [2024-11-20 09:17:04.075471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.704 [2024-11-20 09:17:04.944483] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:39.704 request: 00:04:39.704 { 00:04:39.704 "trtype": "tcp", 00:04:39.704 "method": "nvmf_get_transports", 00:04:39.704 "req_id": 1 00:04:39.704 } 00:04:39.704 Got JSON-RPC error response 00:04:39.704 response: 00:04:39.704 { 00:04:39.704 "code": -19, 00:04:39.704 "message": "No such device" 00:04:39.704 } 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.704 [2024-11-20 09:17:04.956566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.704 09:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.704 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.704 09:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.704 { 00:04:39.704 "subsystems": [ 00:04:39.704 { 00:04:39.704 "subsystem": "fsdev", 00:04:39.704 "config": [ 00:04:39.704 { 00:04:39.704 "method": "fsdev_set_opts", 00:04:39.704 "params": { 00:04:39.704 "fsdev_io_pool_size": 65535, 00:04:39.704 "fsdev_io_cache_size": 256 00:04:39.704 } 00:04:39.704 } 00:04:39.704 ] 00:04:39.704 }, 00:04:39.704 { 00:04:39.704 "subsystem": "keyring", 00:04:39.704 "config": [] 00:04:39.704 }, 00:04:39.704 { 00:04:39.704 "subsystem": "iobuf", 00:04:39.704 "config": [ 00:04:39.704 { 00:04:39.704 "method": "iobuf_set_options", 00:04:39.704 "params": { 00:04:39.704 "small_pool_count": 8192, 00:04:39.704 "large_pool_count": 1024, 00:04:39.704 "small_bufsize": 8192, 00:04:39.704 "large_bufsize": 135168, 00:04:39.704 "enable_numa": false 00:04:39.704 } 00:04:39.704 } 00:04:39.704 ] 00:04:39.704 }, 00:04:39.704 { 00:04:39.704 "subsystem": "sock", 00:04:39.704 "config": [ 00:04:39.704 { 00:04:39.704 "method": "sock_set_default_impl", 00:04:39.704 "params": { 00:04:39.704 "impl_name": "posix" 00:04:39.704 } 00:04:39.704 }, 00:04:39.704 { 00:04:39.704 "method": "sock_impl_set_options", 00:04:39.704 "params": { 00:04:39.704 "impl_name": "ssl", 00:04:39.704 "recv_buf_size": 4096, 00:04:39.704 "send_buf_size": 4096, 00:04:39.704 "enable_recv_pipe": true, 00:04:39.704 "enable_quickack": false, 00:04:39.704 
"enable_placement_id": 0, 00:04:39.704 "enable_zerocopy_send_server": true, 00:04:39.704 "enable_zerocopy_send_client": false, 00:04:39.704 "zerocopy_threshold": 0, 00:04:39.704 "tls_version": 0, 00:04:39.704 "enable_ktls": false 00:04:39.704 } 00:04:39.704 }, 00:04:39.704 { 00:04:39.704 "method": "sock_impl_set_options", 00:04:39.704 "params": { 00:04:39.704 "impl_name": "posix", 00:04:39.704 "recv_buf_size": 2097152, 00:04:39.704 "send_buf_size": 2097152, 00:04:39.704 "enable_recv_pipe": true, 00:04:39.705 "enable_quickack": false, 00:04:39.705 "enable_placement_id": 0, 00:04:39.705 "enable_zerocopy_send_server": true, 00:04:39.705 "enable_zerocopy_send_client": false, 00:04:39.705 "zerocopy_threshold": 0, 00:04:39.705 "tls_version": 0, 00:04:39.705 "enable_ktls": false 00:04:39.705 } 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "vmd", 00:04:39.705 "config": [] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "accel", 00:04:39.705 "config": [ 00:04:39.705 { 00:04:39.705 "method": "accel_set_options", 00:04:39.705 "params": { 00:04:39.705 "small_cache_size": 128, 00:04:39.705 "large_cache_size": 16, 00:04:39.705 "task_count": 2048, 00:04:39.705 "sequence_count": 2048, 00:04:39.705 "buf_count": 2048 00:04:39.705 } 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "bdev", 00:04:39.705 "config": [ 00:04:39.705 { 00:04:39.705 "method": "bdev_set_options", 00:04:39.705 "params": { 00:04:39.705 "bdev_io_pool_size": 65535, 00:04:39.705 "bdev_io_cache_size": 256, 00:04:39.705 "bdev_auto_examine": true, 00:04:39.705 "iobuf_small_cache_size": 128, 00:04:39.705 "iobuf_large_cache_size": 16 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "bdev_raid_set_options", 00:04:39.705 "params": { 00:04:39.705 "process_window_size_kb": 1024, 00:04:39.705 "process_max_bandwidth_mb_sec": 0 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "bdev_iscsi_set_options", 
00:04:39.705 "params": { 00:04:39.705 "timeout_sec": 30 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "bdev_nvme_set_options", 00:04:39.705 "params": { 00:04:39.705 "action_on_timeout": "none", 00:04:39.705 "timeout_us": 0, 00:04:39.705 "timeout_admin_us": 0, 00:04:39.705 "keep_alive_timeout_ms": 10000, 00:04:39.705 "arbitration_burst": 0, 00:04:39.705 "low_priority_weight": 0, 00:04:39.705 "medium_priority_weight": 0, 00:04:39.705 "high_priority_weight": 0, 00:04:39.705 "nvme_adminq_poll_period_us": 10000, 00:04:39.705 "nvme_ioq_poll_period_us": 0, 00:04:39.705 "io_queue_requests": 0, 00:04:39.705 "delay_cmd_submit": true, 00:04:39.705 "transport_retry_count": 4, 00:04:39.705 "bdev_retry_count": 3, 00:04:39.705 "transport_ack_timeout": 0, 00:04:39.705 "ctrlr_loss_timeout_sec": 0, 00:04:39.705 "reconnect_delay_sec": 0, 00:04:39.705 "fast_io_fail_timeout_sec": 0, 00:04:39.705 "disable_auto_failback": false, 00:04:39.705 "generate_uuids": false, 00:04:39.705 "transport_tos": 0, 00:04:39.705 "nvme_error_stat": false, 00:04:39.705 "rdma_srq_size": 0, 00:04:39.705 "io_path_stat": false, 00:04:39.705 "allow_accel_sequence": false, 00:04:39.705 "rdma_max_cq_size": 0, 00:04:39.705 "rdma_cm_event_timeout_ms": 0, 00:04:39.705 "dhchap_digests": [ 00:04:39.705 "sha256", 00:04:39.705 "sha384", 00:04:39.705 "sha512" 00:04:39.705 ], 00:04:39.705 "dhchap_dhgroups": [ 00:04:39.705 "null", 00:04:39.705 "ffdhe2048", 00:04:39.705 "ffdhe3072", 00:04:39.705 "ffdhe4096", 00:04:39.705 "ffdhe6144", 00:04:39.705 "ffdhe8192" 00:04:39.705 ] 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "bdev_nvme_set_hotplug", 00:04:39.705 "params": { 00:04:39.705 "period_us": 100000, 00:04:39.705 "enable": false 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "bdev_wait_for_examine" 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "scsi", 00:04:39.705 "config": null 00:04:39.705 }, 00:04:39.705 { 
00:04:39.705 "subsystem": "scheduler", 00:04:39.705 "config": [ 00:04:39.705 { 00:04:39.705 "method": "framework_set_scheduler", 00:04:39.705 "params": { 00:04:39.705 "name": "static" 00:04:39.705 } 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "vhost_scsi", 00:04:39.705 "config": [] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "vhost_blk", 00:04:39.705 "config": [] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "ublk", 00:04:39.705 "config": [] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "nbd", 00:04:39.705 "config": [] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "nvmf", 00:04:39.705 "config": [ 00:04:39.705 { 00:04:39.705 "method": "nvmf_set_config", 00:04:39.705 "params": { 00:04:39.705 "discovery_filter": "match_any", 00:04:39.705 "admin_cmd_passthru": { 00:04:39.705 "identify_ctrlr": false 00:04:39.705 }, 00:04:39.705 "dhchap_digests": [ 00:04:39.705 "sha256", 00:04:39.705 "sha384", 00:04:39.705 "sha512" 00:04:39.705 ], 00:04:39.705 "dhchap_dhgroups": [ 00:04:39.705 "null", 00:04:39.705 "ffdhe2048", 00:04:39.705 "ffdhe3072", 00:04:39.705 "ffdhe4096", 00:04:39.705 "ffdhe6144", 00:04:39.705 "ffdhe8192" 00:04:39.705 ] 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "nvmf_set_max_subsystems", 00:04:39.705 "params": { 00:04:39.705 "max_subsystems": 1024 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "nvmf_set_crdt", 00:04:39.705 "params": { 00:04:39.705 "crdt1": 0, 00:04:39.705 "crdt2": 0, 00:04:39.705 "crdt3": 0 00:04:39.705 } 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "method": "nvmf_create_transport", 00:04:39.705 "params": { 00:04:39.705 "trtype": "TCP", 00:04:39.705 "max_queue_depth": 128, 00:04:39.705 "max_io_qpairs_per_ctrlr": 127, 00:04:39.705 "in_capsule_data_size": 4096, 00:04:39.705 "max_io_size": 131072, 00:04:39.705 "io_unit_size": 131072, 00:04:39.705 "max_aq_depth": 128, 00:04:39.705 "num_shared_buffers": 511, 
00:04:39.705 "buf_cache_size": 4294967295, 00:04:39.705 "dif_insert_or_strip": false, 00:04:39.705 "zcopy": false, 00:04:39.705 "c2h_success": true, 00:04:39.705 "sock_priority": 0, 00:04:39.705 "abort_timeout_sec": 1, 00:04:39.705 "ack_timeout": 0, 00:04:39.705 "data_wr_pool_size": 0 00:04:39.705 } 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 }, 00:04:39.705 { 00:04:39.705 "subsystem": "iscsi", 00:04:39.705 "config": [ 00:04:39.705 { 00:04:39.705 "method": "iscsi_set_options", 00:04:39.705 "params": { 00:04:39.705 "node_base": "iqn.2016-06.io.spdk", 00:04:39.705 "max_sessions": 128, 00:04:39.705 "max_connections_per_session": 2, 00:04:39.705 "max_queue_depth": 64, 00:04:39.705 "default_time2wait": 2, 00:04:39.705 "default_time2retain": 20, 00:04:39.705 "first_burst_length": 8192, 00:04:39.705 "immediate_data": true, 00:04:39.705 "allow_duplicated_isid": false, 00:04:39.705 "error_recovery_level": 0, 00:04:39.705 "nop_timeout": 60, 00:04:39.705 "nop_in_interval": 30, 00:04:39.705 "disable_chap": false, 00:04:39.705 "require_chap": false, 00:04:39.705 "mutual_chap": false, 00:04:39.705 "chap_group": 0, 00:04:39.705 "max_large_datain_per_connection": 64, 00:04:39.705 "max_r2t_per_connection": 4, 00:04:39.705 "pdu_pool_size": 36864, 00:04:39.705 "immediate_data_pool_size": 16384, 00:04:39.705 "data_out_pool_size": 2048 00:04:39.705 } 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 } 00:04:39.705 ] 00:04:39.705 } 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57335 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57335 ']' 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57335 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.705 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57335 00:04:39.965 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.965 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.965 killing process with pid 57335 00:04:39.965 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57335' 00:04:39.965 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57335 00:04:39.965 09:17:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57335 00:04:42.534 09:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57391 00:04:42.534 09:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.534 09:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57391 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57391 ']' 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57391 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57391 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57391' 00:04:47.807 killing process with pid 57391 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57391 00:04:47.807 09:17:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57391 00:04:49.706 09:17:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.706 09:17:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.963 00:04:49.963 real 0m11.469s 00:04:49.963 user 0m10.967s 00:04:49.963 sys 0m0.843s 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.963 ************************************ 00:04:49.963 END TEST skip_rpc_with_json 00:04:49.963 ************************************ 00:04:49.963 09:17:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.963 09:17:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.963 09:17:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.963 09:17:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.963 ************************************ 00:04:49.963 START TEST skip_rpc_with_delay 00:04:49.963 ************************************ 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:49.963 09:17:15 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.963 [2024-11-20 09:17:15.333019] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.963 00:04:49.963 real 0m0.171s 00:04:49.963 user 0m0.096s 00:04:49.963 sys 0m0.074s 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.963 09:17:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.963 ************************************ 00:04:49.963 END TEST skip_rpc_with_delay 00:04:49.963 ************************************ 00:04:50.221 09:17:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.221 09:17:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.221 09:17:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.221 09:17:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.221 09:17:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.221 09:17:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.221 ************************************ 00:04:50.221 START TEST exit_on_failed_rpc_init 00:04:50.221 ************************************ 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57519 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57519 00:04:50.221 09:17:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57519 ']' 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.221 09:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.221 [2024-11-20 09:17:15.563248] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:04:50.221 [2024-11-20 09:17:15.563386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57519 ] 00:04:50.478 [2024-11-20 09:17:15.742237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.478 [2024-11-20 09:17:15.870842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.410 09:17:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.410 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.411 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.411 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.411 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:51.411 09:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.669 [2024-11-20 09:17:16.877560] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:04:51.669 [2024-11-20 09:17:16.878052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57543 ] 00:04:51.669 [2024-11-20 09:17:17.052467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.926 [2024-11-20 09:17:17.177274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.926 [2024-11-20 09:17:17.177388] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:51.926 [2024-11-20 09:17:17.177402] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.926 [2024-11-20 09:17:17.177420] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57519 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57519 ']' 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57519 00:04:52.190 09:17:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57519 00:04:52.190 killing process with pid 57519 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57519' 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57519 00:04:52.190 09:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57519 00:04:54.716 00:04:54.716 real 0m4.480s 00:04:54.716 user 0m4.877s 00:04:54.716 sys 0m0.550s 00:04:54.716 09:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.716 09:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.716 ************************************ 00:04:54.716 END TEST exit_on_failed_rpc_init 00:04:54.716 ************************************ 00:04:54.716 09:17:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.716 00:04:54.716 real 0m24.062s 00:04:54.716 user 0m23.180s 00:04:54.716 sys 0m2.111s 00:04:54.716 09:17:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.716 09:17:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.716 ************************************ 00:04:54.716 END TEST skip_rpc 00:04:54.716 ************************************ 00:04:54.716 09:17:20 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:54.716 09:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.716 09:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.716 09:17:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.716 ************************************ 00:04:54.716 START TEST rpc_client 00:04:54.716 ************************************ 00:04:54.716 09:17:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:54.716 * Looking for test storage... 00:04:54.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:54.716 09:17:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.716 09:17:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.716 09:17:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.973 09:17:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:54.973 09:17:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:54.974 09:17:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.974 09:17:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:54.974 09:17:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.974 09:17:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.974 09:17:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.974 09:17:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.974 --rc genhtml_branch_coverage=1 00:04:54.974 --rc genhtml_function_coverage=1 00:04:54.974 --rc genhtml_legend=1 00:04:54.974 --rc geninfo_all_blocks=1 00:04:54.974 --rc geninfo_unexecuted_blocks=1 00:04:54.974 00:04:54.974 ' 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.974 --rc genhtml_branch_coverage=1 00:04:54.974 --rc genhtml_function_coverage=1 00:04:54.974 --rc 
genhtml_legend=1 00:04:54.974 --rc geninfo_all_blocks=1 00:04:54.974 --rc geninfo_unexecuted_blocks=1 00:04:54.974 00:04:54.974 ' 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.974 --rc genhtml_branch_coverage=1 00:04:54.974 --rc genhtml_function_coverage=1 00:04:54.974 --rc genhtml_legend=1 00:04:54.974 --rc geninfo_all_blocks=1 00:04:54.974 --rc geninfo_unexecuted_blocks=1 00:04:54.974 00:04:54.974 ' 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.974 --rc genhtml_branch_coverage=1 00:04:54.974 --rc genhtml_function_coverage=1 00:04:54.974 --rc genhtml_legend=1 00:04:54.974 --rc geninfo_all_blocks=1 00:04:54.974 --rc geninfo_unexecuted_blocks=1 00:04:54.974 00:04:54.974 ' 00:04:54.974 09:17:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:54.974 OK 00:04:54.974 09:17:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.974 00:04:54.974 real 0m0.233s 00:04:54.974 user 0m0.115s 00:04:54.974 sys 0m0.136s 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.974 09:17:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:54.974 ************************************ 00:04:54.974 END TEST rpc_client 00:04:54.974 ************************************ 00:04:54.974 09:17:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:54.974 09:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.974 09:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.974 09:17:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.974 ************************************ 00:04:54.974 START TEST json_config 
00:04:54.974 ************************************ 00:04:54.974 09:17:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:54.974 09:17:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.974 09:17:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.974 09:17:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.232 09:17:20 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.232 09:17:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.232 09:17:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.232 09:17:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.232 09:17:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.232 09:17:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.232 09:17:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.232 09:17:20 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.232 09:17:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.232 09:17:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.232 09:17:20 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.232 09:17:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.232 09:17:20 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.232 09:17:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.232 09:17:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.232 09:17:20 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.232 09:17:20 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.232 09:17:20 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.232 --rc genhtml_branch_coverage=1 00:04:55.232 --rc genhtml_function_coverage=1 00:04:55.232 --rc genhtml_legend=1 00:04:55.232 --rc geninfo_all_blocks=1 00:04:55.232 --rc geninfo_unexecuted_blocks=1 00:04:55.232 00:04:55.232 ' 00:04:55.232 09:17:20 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.232 --rc genhtml_branch_coverage=1 00:04:55.232 --rc genhtml_function_coverage=1 00:04:55.232 --rc genhtml_legend=1 00:04:55.232 --rc geninfo_all_blocks=1 00:04:55.232 --rc geninfo_unexecuted_blocks=1 00:04:55.232 00:04:55.232 ' 00:04:55.232 09:17:20 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.232 --rc genhtml_branch_coverage=1 00:04:55.232 --rc genhtml_function_coverage=1 00:04:55.232 --rc genhtml_legend=1 00:04:55.232 --rc geninfo_all_blocks=1 00:04:55.232 --rc geninfo_unexecuted_blocks=1 00:04:55.232 00:04:55.232 ' 00:04:55.232 09:17:20 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.232 --rc genhtml_branch_coverage=1 00:04:55.232 --rc genhtml_function_coverage=1 00:04:55.232 --rc genhtml_legend=1 00:04:55.232 --rc geninfo_all_blocks=1 00:04:55.232 --rc geninfo_unexecuted_blocks=1 00:04:55.232 00:04:55.232 ' 00:04:55.232 09:17:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:440f9e5a-a2c8-4aa6-8016-1a270cad7677 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=440f9e5a-a2c8-4aa6-8016-1a270cad7677 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.232 
09:17:20 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.232 09:17:20 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.232 09:17:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.232 09:17:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.232 09:17:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.232 09:17:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.232 09:17:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.232 09:17:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.233 09:17:20 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.233 09:17:20 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.233 09:17:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:55.233 09:17:20 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:55.233 09:17:20 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:55.233 09:17:20 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@50 -- # : 0 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:55.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:04:55.233 09:17:20 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:55.233 09:17:20 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.233 WARNING: No tests are enabled so not running JSON configuration tests 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:55.233 09:17:20 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:55.233 00:04:55.233 real 0m0.206s 00:04:55.233 user 0m0.140s 00:04:55.233 sys 0m0.071s 00:04:55.233 09:17:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.233 09:17:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.233 ************************************ 00:04:55.233 END TEST json_config 00:04:55.233 ************************************ 00:04:55.233 09:17:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.233 09:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.233 09:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.233 09:17:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.233 ************************************ 00:04:55.233 START TEST json_config_extra_key 00:04:55.233 
************************************ 00:04:55.233 09:17:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.233 09:17:20 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.233 09:17:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.233 09:17:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.491 09:17:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:55.491 09:17:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.492 --rc genhtml_branch_coverage=1 00:04:55.492 --rc genhtml_function_coverage=1 00:04:55.492 --rc genhtml_legend=1 00:04:55.492 --rc geninfo_all_blocks=1 00:04:55.492 --rc geninfo_unexecuted_blocks=1 00:04:55.492 00:04:55.492 ' 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.492 --rc genhtml_branch_coverage=1 00:04:55.492 --rc genhtml_function_coverage=1 00:04:55.492 --rc 
genhtml_legend=1 00:04:55.492 --rc geninfo_all_blocks=1 00:04:55.492 --rc geninfo_unexecuted_blocks=1 00:04:55.492 00:04:55.492 ' 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.492 --rc genhtml_branch_coverage=1 00:04:55.492 --rc genhtml_function_coverage=1 00:04:55.492 --rc genhtml_legend=1 00:04:55.492 --rc geninfo_all_blocks=1 00:04:55.492 --rc geninfo_unexecuted_blocks=1 00:04:55.492 00:04:55.492 ' 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.492 --rc genhtml_branch_coverage=1 00:04:55.492 --rc genhtml_function_coverage=1 00:04:55.492 --rc genhtml_legend=1 00:04:55.492 --rc geninfo_all_blocks=1 00:04:55.492 --rc geninfo_unexecuted_blocks=1 00:04:55.492 00:04:55.492 ' 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:440f9e5a-a2c8-4aa6-8016-1a270cad7677 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=440f9e5a-a2c8-4aa6-8016-1a270cad7677 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.492 09:17:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.492 09:17:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.492 09:17:20 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.492 09:17:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.492 09:17:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.492 09:17:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:55.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:55.492 09:17:20 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:55.492 09:17:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.492 INFO: launching applications... 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:55.492 09:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57753 00:04:55.492 Waiting for target to run... 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57753 /var/tmp/spdk_tgt.sock 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57753 ']' 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.492 09:17:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.492 09:17:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.492 [2024-11-20 09:17:20.873048] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:04:55.492 [2024-11-20 09:17:20.873199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57753 ] 00:04:56.057 [2024-11-20 09:17:21.268042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.057 [2024-11-20 09:17:21.373322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.996 09:17:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.996 09:17:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:56.996 00:04:56.996 INFO: shutting down applications... 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.996 09:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.996 09:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57753 ]] 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57753 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:56.996 09:17:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.263 09:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.263 09:17:22 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.263 09:17:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:57.263 09:17:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.827 09:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.827 09:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.827 09:17:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:57.827 09:17:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.394 09:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.394 09:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.394 09:17:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:58.394 09:17:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.962 09:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.962 09:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.962 09:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:58.962 09:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.221 09:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.221 09:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.221 09:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:59.221 09:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57753 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.788 SPDK target shutdown done 00:04:59.788 09:17:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.788 Success 00:04:59.788 09:17:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:59.788 00:04:59.788 real 0m4.535s 00:04:59.788 user 0m4.088s 00:04:59.788 sys 0m0.564s 00:04:59.788 09:17:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.788 09:17:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.788 ************************************ 00:04:59.788 END TEST json_config_extra_key 00:04:59.788 ************************************ 00:04:59.788 09:17:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.788 09:17:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.788 09:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.788 09:17:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.788 ************************************ 00:04:59.788 START TEST alias_rpc 00:04:59.788 ************************************ 00:04:59.788 09:17:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.047 * Looking for test storage... 
00:05:00.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.047 09:17:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.047 --rc genhtml_branch_coverage=1 00:05:00.047 --rc genhtml_function_coverage=1 00:05:00.047 --rc genhtml_legend=1 00:05:00.047 --rc geninfo_all_blocks=1 00:05:00.047 --rc geninfo_unexecuted_blocks=1 00:05:00.047 00:05:00.047 ' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.047 --rc genhtml_branch_coverage=1 00:05:00.047 --rc genhtml_function_coverage=1 00:05:00.047 --rc genhtml_legend=1 00:05:00.047 --rc geninfo_all_blocks=1 00:05:00.047 --rc geninfo_unexecuted_blocks=1 00:05:00.047 00:05:00.047 ' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:00.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.047 --rc genhtml_branch_coverage=1 00:05:00.047 --rc genhtml_function_coverage=1 00:05:00.047 --rc genhtml_legend=1 00:05:00.047 --rc geninfo_all_blocks=1 00:05:00.047 --rc geninfo_unexecuted_blocks=1 00:05:00.047 00:05:00.047 ' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.047 --rc genhtml_branch_coverage=1 00:05:00.047 --rc genhtml_function_coverage=1 00:05:00.047 --rc genhtml_legend=1 00:05:00.047 --rc geninfo_all_blocks=1 00:05:00.047 --rc geninfo_unexecuted_blocks=1 00:05:00.047 00:05:00.047 ' 00:05:00.047 09:17:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.047 09:17:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.047 09:17:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57864 00:05:00.047 09:17:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57864 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57864 ']' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.047 09:17:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.047 [2024-11-20 09:17:25.465130] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:00.047 [2024-11-20 09:17:25.465264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57864 ] 00:05:00.306 [2024-11-20 09:17:25.640301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.306 [2024-11-20 09:17:25.759168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.242 09:17:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.242 09:17:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:01.242 09:17:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:01.501 09:17:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57864 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57864 ']' 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57864 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57864 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57864' 00:05:01.501 killing process with pid 57864 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57864 00:05:01.501 09:17:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57864 00:05:04.030 00:05:04.030 real 0m4.109s 00:05:04.030 user 0m4.125s 00:05:04.030 sys 0m0.557s 00:05:04.030 09:17:29 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:04.030 09:17:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.030 ************************************ 00:05:04.030 END TEST alias_rpc 00:05:04.030 ************************************ 00:05:04.030 09:17:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:04.030 09:17:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.030 09:17:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.030 09:17:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.030 09:17:29 -- common/autotest_common.sh@10 -- # set +x 00:05:04.030 ************************************ 00:05:04.030 START TEST spdkcli_tcp 00:05:04.030 ************************************ 00:05:04.030 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.030 * Looking for test storage... 00:05:04.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:04.030 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.030 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.030 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.288 
09:17:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.288 09:17:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.288 --rc genhtml_branch_coverage=1 00:05:04.288 --rc genhtml_function_coverage=1 00:05:04.288 --rc genhtml_legend=1 
00:05:04.288 --rc geninfo_all_blocks=1 00:05:04.288 --rc geninfo_unexecuted_blocks=1 00:05:04.288 00:05:04.288 ' 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.288 --rc genhtml_branch_coverage=1 00:05:04.288 --rc genhtml_function_coverage=1 00:05:04.288 --rc genhtml_legend=1 00:05:04.288 --rc geninfo_all_blocks=1 00:05:04.288 --rc geninfo_unexecuted_blocks=1 00:05:04.288 00:05:04.288 ' 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.288 --rc genhtml_branch_coverage=1 00:05:04.288 --rc genhtml_function_coverage=1 00:05:04.288 --rc genhtml_legend=1 00:05:04.288 --rc geninfo_all_blocks=1 00:05:04.288 --rc geninfo_unexecuted_blocks=1 00:05:04.288 00:05:04.288 ' 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.288 --rc genhtml_branch_coverage=1 00:05:04.288 --rc genhtml_function_coverage=1 00:05:04.288 --rc genhtml_legend=1 00:05:04.288 --rc geninfo_all_blocks=1 00:05:04.288 --rc geninfo_unexecuted_blocks=1 00:05:04.288 00:05:04.288 ' 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.288 09:17:29 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57971 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.288 09:17:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57971 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57971 ']' 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.288 09:17:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.288 [2024-11-20 09:17:29.671219] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:04.288 [2024-11-20 09:17:29.671796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57971 ] 00:05:04.546 [2024-11-20 09:17:29.837489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.546 [2024-11-20 09:17:29.954922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.546 [2024-11-20 09:17:29.954959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.478 09:17:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.478 09:17:30 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:05.478 09:17:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:05.478 09:17:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57988 00:05:05.478 09:17:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.736 [ 00:05:05.737 "bdev_malloc_delete", 00:05:05.737 "bdev_malloc_create", 00:05:05.737 "bdev_null_resize", 00:05:05.737 "bdev_null_delete", 00:05:05.737 "bdev_null_create", 00:05:05.737 "bdev_nvme_cuse_unregister", 00:05:05.737 "bdev_nvme_cuse_register", 00:05:05.737 "bdev_opal_new_user", 00:05:05.737 "bdev_opal_set_lock_state", 00:05:05.737 "bdev_opal_delete", 00:05:05.737 "bdev_opal_get_info", 00:05:05.737 "bdev_opal_create", 00:05:05.737 "bdev_nvme_opal_revert", 00:05:05.737 "bdev_nvme_opal_init", 00:05:05.737 "bdev_nvme_send_cmd", 00:05:05.737 "bdev_nvme_set_keys", 00:05:05.737 "bdev_nvme_get_path_iostat", 00:05:05.737 "bdev_nvme_get_mdns_discovery_info", 00:05:05.737 "bdev_nvme_stop_mdns_discovery", 00:05:05.737 "bdev_nvme_start_mdns_discovery", 00:05:05.737 "bdev_nvme_set_multipath_policy", 00:05:05.737 
"bdev_nvme_set_preferred_path", 00:05:05.737 "bdev_nvme_get_io_paths", 00:05:05.737 "bdev_nvme_remove_error_injection", 00:05:05.737 "bdev_nvme_add_error_injection", 00:05:05.737 "bdev_nvme_get_discovery_info", 00:05:05.737 "bdev_nvme_stop_discovery", 00:05:05.737 "bdev_nvme_start_discovery", 00:05:05.737 "bdev_nvme_get_controller_health_info", 00:05:05.737 "bdev_nvme_disable_controller", 00:05:05.737 "bdev_nvme_enable_controller", 00:05:05.737 "bdev_nvme_reset_controller", 00:05:05.737 "bdev_nvme_get_transport_statistics", 00:05:05.737 "bdev_nvme_apply_firmware", 00:05:05.737 "bdev_nvme_detach_controller", 00:05:05.737 "bdev_nvme_get_controllers", 00:05:05.737 "bdev_nvme_attach_controller", 00:05:05.737 "bdev_nvme_set_hotplug", 00:05:05.737 "bdev_nvme_set_options", 00:05:05.737 "bdev_passthru_delete", 00:05:05.737 "bdev_passthru_create", 00:05:05.737 "bdev_lvol_set_parent_bdev", 00:05:05.737 "bdev_lvol_set_parent", 00:05:05.737 "bdev_lvol_check_shallow_copy", 00:05:05.737 "bdev_lvol_start_shallow_copy", 00:05:05.737 "bdev_lvol_grow_lvstore", 00:05:05.737 "bdev_lvol_get_lvols", 00:05:05.737 "bdev_lvol_get_lvstores", 00:05:05.737 "bdev_lvol_delete", 00:05:05.737 "bdev_lvol_set_read_only", 00:05:05.737 "bdev_lvol_resize", 00:05:05.737 "bdev_lvol_decouple_parent", 00:05:05.737 "bdev_lvol_inflate", 00:05:05.737 "bdev_lvol_rename", 00:05:05.737 "bdev_lvol_clone_bdev", 00:05:05.737 "bdev_lvol_clone", 00:05:05.737 "bdev_lvol_snapshot", 00:05:05.737 "bdev_lvol_create", 00:05:05.737 "bdev_lvol_delete_lvstore", 00:05:05.737 "bdev_lvol_rename_lvstore", 00:05:05.737 "bdev_lvol_create_lvstore", 00:05:05.737 "bdev_raid_set_options", 00:05:05.737 "bdev_raid_remove_base_bdev", 00:05:05.737 "bdev_raid_add_base_bdev", 00:05:05.737 "bdev_raid_delete", 00:05:05.737 "bdev_raid_create", 00:05:05.737 "bdev_raid_get_bdevs", 00:05:05.737 "bdev_error_inject_error", 00:05:05.737 "bdev_error_delete", 00:05:05.737 "bdev_error_create", 00:05:05.737 "bdev_split_delete", 00:05:05.737 
"bdev_split_create", 00:05:05.737 "bdev_delay_delete", 00:05:05.737 "bdev_delay_create", 00:05:05.737 "bdev_delay_update_latency", 00:05:05.737 "bdev_zone_block_delete", 00:05:05.737 "bdev_zone_block_create", 00:05:05.737 "blobfs_create", 00:05:05.737 "blobfs_detect", 00:05:05.737 "blobfs_set_cache_size", 00:05:05.737 "bdev_aio_delete", 00:05:05.737 "bdev_aio_rescan", 00:05:05.737 "bdev_aio_create", 00:05:05.737 "bdev_ftl_set_property", 00:05:05.737 "bdev_ftl_get_properties", 00:05:05.737 "bdev_ftl_get_stats", 00:05:05.737 "bdev_ftl_unmap", 00:05:05.737 "bdev_ftl_unload", 00:05:05.737 "bdev_ftl_delete", 00:05:05.737 "bdev_ftl_load", 00:05:05.737 "bdev_ftl_create", 00:05:05.737 "bdev_virtio_attach_controller", 00:05:05.737 "bdev_virtio_scsi_get_devices", 00:05:05.737 "bdev_virtio_detach_controller", 00:05:05.737 "bdev_virtio_blk_set_hotplug", 00:05:05.737 "bdev_iscsi_delete", 00:05:05.737 "bdev_iscsi_create", 00:05:05.737 "bdev_iscsi_set_options", 00:05:05.737 "accel_error_inject_error", 00:05:05.737 "ioat_scan_accel_module", 00:05:05.737 "dsa_scan_accel_module", 00:05:05.737 "iaa_scan_accel_module", 00:05:05.737 "keyring_file_remove_key", 00:05:05.737 "keyring_file_add_key", 00:05:05.737 "keyring_linux_set_options", 00:05:05.737 "fsdev_aio_delete", 00:05:05.737 "fsdev_aio_create", 00:05:05.737 "iscsi_get_histogram", 00:05:05.737 "iscsi_enable_histogram", 00:05:05.737 "iscsi_set_options", 00:05:05.737 "iscsi_get_auth_groups", 00:05:05.737 "iscsi_auth_group_remove_secret", 00:05:05.737 "iscsi_auth_group_add_secret", 00:05:05.737 "iscsi_delete_auth_group", 00:05:05.737 "iscsi_create_auth_group", 00:05:05.737 "iscsi_set_discovery_auth", 00:05:05.737 "iscsi_get_options", 00:05:05.737 "iscsi_target_node_request_logout", 00:05:05.737 "iscsi_target_node_set_redirect", 00:05:05.737 "iscsi_target_node_set_auth", 00:05:05.737 "iscsi_target_node_add_lun", 00:05:05.737 "iscsi_get_stats", 00:05:05.737 "iscsi_get_connections", 00:05:05.737 "iscsi_portal_group_set_auth", 
00:05:05.737 "iscsi_start_portal_group", 00:05:05.737 "iscsi_delete_portal_group", 00:05:05.737 "iscsi_create_portal_group", 00:05:05.737 "iscsi_get_portal_groups", 00:05:05.737 "iscsi_delete_target_node", 00:05:05.737 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.737 "iscsi_target_node_add_pg_ig_maps", 00:05:05.737 "iscsi_create_target_node", 00:05:05.737 "iscsi_get_target_nodes", 00:05:05.737 "iscsi_delete_initiator_group", 00:05:05.737 "iscsi_initiator_group_remove_initiators", 00:05:05.737 "iscsi_initiator_group_add_initiators", 00:05:05.737 "iscsi_create_initiator_group", 00:05:05.737 "iscsi_get_initiator_groups", 00:05:05.737 "nvmf_set_crdt", 00:05:05.737 "nvmf_set_config", 00:05:05.737 "nvmf_set_max_subsystems", 00:05:05.737 "nvmf_stop_mdns_prr", 00:05:05.737 "nvmf_publish_mdns_prr", 00:05:05.737 "nvmf_subsystem_get_listeners", 00:05:05.737 "nvmf_subsystem_get_qpairs", 00:05:05.737 "nvmf_subsystem_get_controllers", 00:05:05.737 "nvmf_get_stats", 00:05:05.737 "nvmf_get_transports", 00:05:05.737 "nvmf_create_transport", 00:05:05.737 "nvmf_get_targets", 00:05:05.737 "nvmf_delete_target", 00:05:05.737 "nvmf_create_target", 00:05:05.737 "nvmf_subsystem_allow_any_host", 00:05:05.737 "nvmf_subsystem_set_keys", 00:05:05.737 "nvmf_subsystem_remove_host", 00:05:05.737 "nvmf_subsystem_add_host", 00:05:05.737 "nvmf_ns_remove_host", 00:05:05.737 "nvmf_ns_add_host", 00:05:05.737 "nvmf_subsystem_remove_ns", 00:05:05.737 "nvmf_subsystem_set_ns_ana_group", 00:05:05.737 "nvmf_subsystem_add_ns", 00:05:05.737 "nvmf_subsystem_listener_set_ana_state", 00:05:05.737 "nvmf_discovery_get_referrals", 00:05:05.737 "nvmf_discovery_remove_referral", 00:05:05.737 "nvmf_discovery_add_referral", 00:05:05.737 "nvmf_subsystem_remove_listener", 00:05:05.737 "nvmf_subsystem_add_listener", 00:05:05.737 "nvmf_delete_subsystem", 00:05:05.737 "nvmf_create_subsystem", 00:05:05.737 "nvmf_get_subsystems", 00:05:05.737 "env_dpdk_get_mem_stats", 00:05:05.737 "nbd_get_disks", 00:05:05.737 
"nbd_stop_disk", 00:05:05.737 "nbd_start_disk", 00:05:05.737 "ublk_recover_disk", 00:05:05.737 "ublk_get_disks", 00:05:05.737 "ublk_stop_disk", 00:05:05.737 "ublk_start_disk", 00:05:05.737 "ublk_destroy_target", 00:05:05.737 "ublk_create_target", 00:05:05.737 "virtio_blk_create_transport", 00:05:05.737 "virtio_blk_get_transports", 00:05:05.737 "vhost_controller_set_coalescing", 00:05:05.737 "vhost_get_controllers", 00:05:05.737 "vhost_delete_controller", 00:05:05.737 "vhost_create_blk_controller", 00:05:05.737 "vhost_scsi_controller_remove_target", 00:05:05.737 "vhost_scsi_controller_add_target", 00:05:05.737 "vhost_start_scsi_controller", 00:05:05.737 "vhost_create_scsi_controller", 00:05:05.737 "thread_set_cpumask", 00:05:05.737 "scheduler_set_options", 00:05:05.737 "framework_get_governor", 00:05:05.737 "framework_get_scheduler", 00:05:05.737 "framework_set_scheduler", 00:05:05.737 "framework_get_reactors", 00:05:05.737 "thread_get_io_channels", 00:05:05.737 "thread_get_pollers", 00:05:05.737 "thread_get_stats", 00:05:05.737 "framework_monitor_context_switch", 00:05:05.737 "spdk_kill_instance", 00:05:05.737 "log_enable_timestamps", 00:05:05.737 "log_get_flags", 00:05:05.738 "log_clear_flag", 00:05:05.738 "log_set_flag", 00:05:05.738 "log_get_level", 00:05:05.738 "log_set_level", 00:05:05.738 "log_get_print_level", 00:05:05.738 "log_set_print_level", 00:05:05.738 "framework_enable_cpumask_locks", 00:05:05.738 "framework_disable_cpumask_locks", 00:05:05.738 "framework_wait_init", 00:05:05.738 "framework_start_init", 00:05:05.738 "scsi_get_devices", 00:05:05.738 "bdev_get_histogram", 00:05:05.738 "bdev_enable_histogram", 00:05:05.738 "bdev_set_qos_limit", 00:05:05.738 "bdev_set_qd_sampling_period", 00:05:05.738 "bdev_get_bdevs", 00:05:05.738 "bdev_reset_iostat", 00:05:05.738 "bdev_get_iostat", 00:05:05.738 "bdev_examine", 00:05:05.738 "bdev_wait_for_examine", 00:05:05.738 "bdev_set_options", 00:05:05.738 "accel_get_stats", 00:05:05.738 "accel_set_options", 
00:05:05.738 "accel_set_driver", 00:05:05.738 "accel_crypto_key_destroy", 00:05:05.738 "accel_crypto_keys_get", 00:05:05.738 "accel_crypto_key_create", 00:05:05.738 "accel_assign_opc", 00:05:05.738 "accel_get_module_info", 00:05:05.738 "accel_get_opc_assignments", 00:05:05.738 "vmd_rescan", 00:05:05.738 "vmd_remove_device", 00:05:05.738 "vmd_enable", 00:05:05.738 "sock_get_default_impl", 00:05:05.738 "sock_set_default_impl", 00:05:05.738 "sock_impl_set_options", 00:05:05.738 "sock_impl_get_options", 00:05:05.738 "iobuf_get_stats", 00:05:05.738 "iobuf_set_options", 00:05:05.738 "keyring_get_keys", 00:05:05.738 "framework_get_pci_devices", 00:05:05.738 "framework_get_config", 00:05:05.738 "framework_get_subsystems", 00:05:05.738 "fsdev_set_opts", 00:05:05.738 "fsdev_get_opts", 00:05:05.738 "trace_get_info", 00:05:05.738 "trace_get_tpoint_group_mask", 00:05:05.738 "trace_disable_tpoint_group", 00:05:05.738 "trace_enable_tpoint_group", 00:05:05.738 "trace_clear_tpoint_mask", 00:05:05.738 "trace_set_tpoint_mask", 00:05:05.738 "notify_get_notifications", 00:05:05.738 "notify_get_types", 00:05:05.738 "spdk_get_version", 00:05:05.738 "rpc_get_methods" 00:05:05.738 ] 00:05:05.738 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.738 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.738 09:17:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57971 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57971 ']' 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57971 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.738 09:17:31 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57971 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.738 killing process with pid 57971 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57971' 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57971 00:05:05.738 09:17:31 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57971 00:05:08.264 00:05:08.264 real 0m4.296s 00:05:08.264 user 0m7.731s 00:05:08.264 sys 0m0.605s 00:05:08.264 09:17:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.264 09:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.264 ************************************ 00:05:08.264 END TEST spdkcli_tcp 00:05:08.264 ************************************ 00:05:08.264 09:17:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:08.264 09:17:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.264 09:17:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.264 09:17:33 -- common/autotest_common.sh@10 -- # set +x 00:05:08.264 ************************************ 00:05:08.264 START TEST dpdk_mem_utility 00:05:08.264 ************************************ 00:05:08.265 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:08.522 * Looking for test storage... 
00:05:08.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.522 09:17:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.522 --rc genhtml_branch_coverage=1 00:05:08.522 --rc genhtml_function_coverage=1 00:05:08.522 --rc genhtml_legend=1 00:05:08.522 --rc geninfo_all_blocks=1 00:05:08.522 --rc geninfo_unexecuted_blocks=1 00:05:08.522 00:05:08.522 ' 00:05:08.522 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.522 --rc genhtml_branch_coverage=1 00:05:08.522 --rc genhtml_function_coverage=1 00:05:08.523 --rc genhtml_legend=1 00:05:08.523 --rc geninfo_all_blocks=1 00:05:08.523 --rc 
geninfo_unexecuted_blocks=1 00:05:08.523 00:05:08.523 ' 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.523 --rc genhtml_branch_coverage=1 00:05:08.523 --rc genhtml_function_coverage=1 00:05:08.523 --rc genhtml_legend=1 00:05:08.523 --rc geninfo_all_blocks=1 00:05:08.523 --rc geninfo_unexecuted_blocks=1 00:05:08.523 00:05:08.523 ' 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.523 --rc genhtml_branch_coverage=1 00:05:08.523 --rc genhtml_function_coverage=1 00:05:08.523 --rc genhtml_legend=1 00:05:08.523 --rc geninfo_all_blocks=1 00:05:08.523 --rc geninfo_unexecuted_blocks=1 00:05:08.523 00:05:08.523 ' 00:05:08.523 09:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:08.523 09:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.523 09:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58093 00:05:08.523 09:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58093 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58093 ']' 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.523 09:17:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.781 [2024-11-20 09:17:34.045525] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:08.781 [2024-11-20 09:17:34.045750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:05:08.781 [2024-11-20 09:17:34.223485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.039 [2024-11-20 09:17:34.340036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.977 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.977 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:09.977 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:09.977 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:09.977 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.977 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.977 { 00:05:09.977 "filename": "/tmp/spdk_mem_dump.txt" 00:05:09.977 } 00:05:09.977 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.977 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:09.977 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:09.977 1 heaps totaling size 816.000000 MiB 00:05:09.977 size: 816.000000 MiB heap id: 0 00:05:09.977 end heaps---------- 00:05:09.977 9 mempools totaling size 595.772034 MiB 00:05:09.977 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:09.977 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:09.977 size: 92.545471 MiB name: bdev_io_58093 00:05:09.977 size: 50.003479 MiB name: msgpool_58093 00:05:09.977 size: 36.509338 MiB name: fsdev_io_58093 00:05:09.977 size: 21.763794 MiB name: PDU_Pool 00:05:09.977 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:09.977 size: 4.133484 MiB name: evtpool_58093 00:05:09.977 size: 0.026123 MiB name: Session_Pool 00:05:09.977 end mempools------- 00:05:09.977 6 memzones totaling size 4.142822 MiB 00:05:09.977 size: 1.000366 MiB name: RG_ring_0_58093 00:05:09.977 size: 1.000366 MiB name: RG_ring_1_58093 00:05:09.977 size: 1.000366 MiB name: RG_ring_4_58093 00:05:09.977 size: 1.000366 MiB name: RG_ring_5_58093 00:05:09.977 size: 0.125366 MiB name: RG_ring_2_58093 00:05:09.977 size: 0.015991 MiB name: RG_ring_3_58093 00:05:09.977 end memzones------- 00:05:09.977 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:09.977 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:05:09.977 list of free elements. 
size: 16.790405 MiB 00:05:09.977 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:09.977 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:09.977 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:09.977 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:09.977 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:09.977 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:09.977 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:09.977 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:09.977 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:09.977 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:09.977 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:09.977 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:05:09.977 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:09.977 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:09.977 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:09.977 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:09.977 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:09.977 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:09.977 list of standard malloc elements. 
size: 199.288696 MiB 00:05:09.977 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:09.977 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:09.977 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:09.977 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:09.977 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:09.977 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:09.977 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:09.977 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:09.977 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:09.977 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:09.977 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:09.977 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:09.977 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:09.977 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:09.977 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:09.977 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:09.977 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:09.978 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:09.978 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:09.978 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac90dc0 with size: 0.000244 
MiB 00:05:09.978 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac929c0 
with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:09.978 element at 
address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:09.978 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806b880 with size: 0.000244 MiB 
00:05:09.978 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d480 with 
size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:09.978 element at address: 
0x20002806f080 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:09.978 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:09.978 list of memzone associated elements. 
size: 599.920898 MiB 00:05:09.978 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:09.978 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:09.978 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:09.978 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:09.978 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:09.978 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58093_0 00:05:09.978 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:09.978 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58093_0 00:05:09.978 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:09.978 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58093_0 00:05:09.978 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:09.978 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:09.978 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:09.978 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:09.978 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:09.978 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58093_0 00:05:09.978 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:09.978 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58093 00:05:09.978 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:09.978 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58093 00:05:09.978 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:09.978 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:09.978 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:09.978 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:09.978 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:09.978 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:09.978 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:09.978 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:09.978 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:09.978 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58093 00:05:09.978 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:09.979 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58093 00:05:09.979 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:09.979 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58093 00:05:09.979 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:09.979 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58093 00:05:09.979 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:09.979 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58093 00:05:09.979 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:09.979 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58093 00:05:09.979 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:09.979 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:09.979 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:09.979 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:09.979 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:09.979 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:09.979 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:09.979 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58093 00:05:09.979 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:09.979 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58093 00:05:09.979 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:09.979 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:09.979 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:09.979 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:09.979 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:09.979 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58093 00:05:09.979 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:09.979 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:09.979 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:09.979 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58093 00:05:09.979 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:09.979 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58093 00:05:09.979 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:09.979 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58093 00:05:09.979 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:09.979 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:09.979 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:09.979 09:17:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58093 00:05:09.979 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58093 ']' 00:05:09.979 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58093 00:05:09.979 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:10.236 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.236 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58093 00:05:10.236 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.236 09:17:35 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.236 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58093' 00:05:10.236 killing process with pid 58093 00:05:10.236 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58093 00:05:10.236 09:17:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58093 00:05:12.765 00:05:12.765 real 0m4.372s 00:05:12.765 user 0m4.290s 00:05:12.765 sys 0m0.596s 00:05:12.765 09:17:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.765 09:17:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.765 ************************************ 00:05:12.765 END TEST dpdk_mem_utility 00:05:12.765 ************************************ 00:05:12.765 09:17:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.765 09:17:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.765 09:17:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.765 09:17:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.765 ************************************ 00:05:12.765 START TEST event 00:05:12.765 ************************************ 00:05:12.765 09:17:38 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:13.025 * Looking for test storage... 
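The `killprocess 58093` trace above follows a common shell teardown pattern: probe the PID with `kill -0`, confirm the command name with `ps` before signalling, then `kill` and `wait` to reap the child. A minimal sketch of that pattern (the function name and messages are illustrative; the real logic lives in SPDK's `common/autotest_common.sh`):

```shell
# Hedged sketch of the kill-and-verify pattern seen in the trace above.
killprocess_sketch() {
    local pid=$1
    # kill -0 sends no signal; it only checks that the PID exists
    # and that we are permitted to signal it.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "no such process: $pid"
        return 1
    fi
    # Confirm we are about to kill the process we think we are,
    # as the trace does with "ps --no-headers -o comm=".
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    # Reap the child so it does not linger as a zombie.
    wait "$pid" 2>/dev/null || true
}
```

The `kill -0` probe is why the trace runs `kill -0 58093` before `uname` and the `ps` name check: signalling a stale PID would otherwise risk killing an unrelated process that reused the number.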
00:05:13.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.025 09:17:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.025 09:17:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.025 09:17:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.025 09:17:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.025 09:17:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.025 09:17:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.025 09:17:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.025 09:17:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.025 09:17:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.025 09:17:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.025 09:17:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.025 09:17:38 event -- scripts/common.sh@344 -- # case "$op" in 00:05:13.025 09:17:38 event -- scripts/common.sh@345 -- # : 1 00:05:13.025 09:17:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.025 09:17:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.025 09:17:38 event -- scripts/common.sh@365 -- # decimal 1 00:05:13.025 09:17:38 event -- scripts/common.sh@353 -- # local d=1 00:05:13.025 09:17:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.025 09:17:38 event -- scripts/common.sh@355 -- # echo 1 00:05:13.025 09:17:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.025 09:17:38 event -- scripts/common.sh@366 -- # decimal 2 00:05:13.025 09:17:38 event -- scripts/common.sh@353 -- # local d=2 00:05:13.025 09:17:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.025 09:17:38 event -- scripts/common.sh@355 -- # echo 2 00:05:13.025 09:17:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.025 09:17:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.025 09:17:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.025 09:17:38 event -- scripts/common.sh@368 -- # return 0 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.025 --rc genhtml_branch_coverage=1 00:05:13.025 --rc genhtml_function_coverage=1 00:05:13.025 --rc genhtml_legend=1 00:05:13.025 --rc geninfo_all_blocks=1 00:05:13.025 --rc geninfo_unexecuted_blocks=1 00:05:13.025 00:05:13.025 ' 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.025 --rc genhtml_branch_coverage=1 00:05:13.025 --rc genhtml_function_coverage=1 00:05:13.025 --rc genhtml_legend=1 00:05:13.025 --rc geninfo_all_blocks=1 00:05:13.025 --rc geninfo_unexecuted_blocks=1 00:05:13.025 00:05:13.025 ' 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.025 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:13.025 --rc genhtml_branch_coverage=1 00:05:13.025 --rc genhtml_function_coverage=1 00:05:13.025 --rc genhtml_legend=1 00:05:13.025 --rc geninfo_all_blocks=1 00:05:13.025 --rc geninfo_unexecuted_blocks=1 00:05:13.025 00:05:13.025 ' 00:05:13.025 09:17:38 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.026 --rc genhtml_branch_coverage=1 00:05:13.026 --rc genhtml_function_coverage=1 00:05:13.026 --rc genhtml_legend=1 00:05:13.026 --rc geninfo_all_blocks=1 00:05:13.026 --rc geninfo_unexecuted_blocks=1 00:05:13.026 00:05:13.026 ' 00:05:13.026 09:17:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:13.026 09:17:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:13.026 09:17:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.026 09:17:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:13.026 09:17:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.026 09:17:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.026 ************************************ 00:05:13.026 START TEST event_perf 00:05:13.026 ************************************ 00:05:13.026 09:17:38 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.026 Running I/O for 1 seconds...[2024-11-20 09:17:38.412345] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:13.026 [2024-11-20 09:17:38.412544] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58208 ] 00:05:13.284 [2024-11-20 09:17:38.591692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.284 [2024-11-20 09:17:38.720895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.284 [2024-11-20 09:17:38.721079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.284 [2024-11-20 09:17:38.721228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.284 [2024-11-20 09:17:38.721263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.667 Running I/O for 1 seconds... 00:05:14.667 lcore 0: 190840 00:05:14.667 lcore 1: 190839 00:05:14.667 lcore 2: 190840 00:05:14.667 lcore 3: 190839 00:05:14.667 done. 
00:05:14.667 00:05:14.667 real 0m1.618s 00:05:14.667 user 0m4.378s 00:05:14.667 sys 0m0.116s 00:05:14.667 ************************************ 00:05:14.667 END TEST event_perf 00:05:14.667 ************************************ 00:05:14.667 09:17:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.667 09:17:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.667 09:17:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:14.667 09:17:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:14.667 09:17:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.667 09:17:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.667 ************************************ 00:05:14.667 START TEST event_reactor 00:05:14.667 ************************************ 00:05:14.667 09:17:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:14.667 [2024-11-20 09:17:40.093048] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:14.667 [2024-11-20 09:17:40.093203] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:05:14.926 [2024-11-20 09:17:40.272094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.185 [2024-11-20 09:17:40.398371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.562 test_start 00:05:16.562 oneshot 00:05:16.562 tick 100 00:05:16.562 tick 100 00:05:16.562 tick 250 00:05:16.562 tick 100 00:05:16.562 tick 100 00:05:16.562 tick 100 00:05:16.562 tick 250 00:05:16.562 tick 500 00:05:16.562 tick 100 00:05:16.562 tick 100 00:05:16.562 tick 250 00:05:16.562 tick 100 00:05:16.562 tick 100 00:05:16.562 test_end 00:05:16.562 00:05:16.562 real 0m1.603s 00:05:16.562 user 0m1.394s 00:05:16.562 sys 0m0.100s 00:05:16.562 ************************************ 00:05:16.562 END TEST event_reactor 00:05:16.562 ************************************ 00:05:16.562 09:17:41 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.562 09:17:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 09:17:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.562 09:17:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:16.562 09:17:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.562 09:17:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.562 ************************************ 00:05:16.562 START TEST event_reactor_perf 00:05:16.562 ************************************ 00:05:16.562 09:17:41 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.562 [2024-11-20 
09:17:41.757510] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:16.562 [2024-11-20 09:17:41.757688] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58284 ] 00:05:16.562 [2024-11-20 09:17:41.921918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.821 [2024-11-20 09:17:42.042336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.200 test_start 00:05:18.200 test_end 00:05:18.200 Performance: 338682 events per second 00:05:18.200 00:05:18.200 real 0m1.582s 00:05:18.200 user 0m1.385s 00:05:18.200 sys 0m0.088s 00:05:18.200 09:17:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.200 09:17:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.200 ************************************ 00:05:18.200 END TEST event_reactor_perf 00:05:18.200 ************************************ 00:05:18.200 09:17:43 event -- event/event.sh@49 -- # uname -s 00:05:18.200 09:17:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.200 09:17:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.200 09:17:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.200 09:17:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.200 09:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.200 ************************************ 00:05:18.200 START TEST event_scheduler 00:05:18.200 ************************************ 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.200 * Looking for test storage... 
00:05:18.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.200 09:17:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.200 --rc genhtml_branch_coverage=1 00:05:18.200 --rc genhtml_function_coverage=1 00:05:18.200 --rc genhtml_legend=1 00:05:18.200 --rc geninfo_all_blocks=1 00:05:18.200 --rc geninfo_unexecuted_blocks=1 00:05:18.200 00:05:18.200 ' 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.200 --rc genhtml_branch_coverage=1 00:05:18.200 --rc genhtml_function_coverage=1 00:05:18.200 --rc 
genhtml_legend=1 00:05:18.200 --rc geninfo_all_blocks=1 00:05:18.200 --rc geninfo_unexecuted_blocks=1 00:05:18.200 00:05:18.200 ' 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.200 --rc genhtml_branch_coverage=1 00:05:18.200 --rc genhtml_function_coverage=1 00:05:18.200 --rc genhtml_legend=1 00:05:18.200 --rc geninfo_all_blocks=1 00:05:18.200 --rc geninfo_unexecuted_blocks=1 00:05:18.200 00:05:18.200 ' 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.200 --rc genhtml_branch_coverage=1 00:05:18.200 --rc genhtml_function_coverage=1 00:05:18.200 --rc genhtml_legend=1 00:05:18.200 --rc geninfo_all_blocks=1 00:05:18.200 --rc geninfo_unexecuted_blocks=1 00:05:18.200 00:05:18.200 ' 00:05:18.200 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.200 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58360 00:05:18.200 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.200 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.200 09:17:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58360 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58360 ']' 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:18.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.200 09:17:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.458 [2024-11-20 09:17:43.678399] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:05:18.458 [2024-11-20 09:17:43.678544] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58360 ] 00:05:18.458 [2024-11-20 09:17:43.855461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.717 [2024-11-20 09:17:43.979815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.717 [2024-11-20 09:17:43.979989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.717 [2024-11-20 09:17:43.980011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.717 [2024-11-20 09:17:43.980056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:19.286 09:17:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.286 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.286 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.286 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.286 POWER: Cannot set governor of lcore 0 to performance 00:05:19.286 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.286 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.286 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.286 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.286 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:19.286 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:19.286 POWER: Unable to set Power Management Environment for lcore 0 00:05:19.286 [2024-11-20 09:17:44.568893] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:19.286 [2024-11-20 09:17:44.568924] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:19.286 [2024-11-20 09:17:44.568941] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:19.286 [2024-11-20 09:17:44.568970] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:19.286 [2024-11-20 09:17:44.568984] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:19.286 [2024-11-20 09:17:44.569000] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.286 09:17:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.286 09:17:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 [2024-11-20 09:17:44.912224] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:19.545 09:17:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:19.545 09:17:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.545 09:17:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 ************************************ 00:05:19.545 START TEST scheduler_create_thread 00:05:19.545 ************************************ 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 2 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 3 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 4 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 5 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 6 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.545 7 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 8 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.545 9 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.545 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.805 10 00:05:19.805 09:17:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:19.806 09:17:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.806 09:17:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.744 ************************************ 00:05:20.744 END TEST scheduler_create_thread 00:05:20.744 ************************************ 00:05:20.744 09:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.744 00:05:20.744 real 0m1.177s 00:05:20.744 user 0m0.011s 00:05:20.744 sys 0m0.009s 00:05:20.744 09:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.744 09:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.744 09:17:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:20.744 09:17:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58360 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58360 ']' 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58360 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58360 00:05:20.744 killing process with pid 58360 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58360' 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58360 00:05:20.744 09:17:46 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58360 00:05:21.314 [2024-11-20 09:17:46.577483] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:22.693 ************************************ 00:05:22.693 END TEST event_scheduler 00:05:22.693 ************************************ 00:05:22.693 00:05:22.693 real 0m4.598s 00:05:22.693 user 0m7.887s 00:05:22.693 sys 0m0.504s 00:05:22.693 09:17:47 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.693 09:17:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.693 09:17:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.693 09:17:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.693 09:17:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.693 09:17:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.693 09:17:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.693 ************************************ 00:05:22.693 START TEST app_repeat 00:05:22.693 ************************************ 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.693 Process app_repeat pid: 58455 00:05:22.693 spdk_app_start Round 0 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58455 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58455' 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.693 09:17:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.693 09:17:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.693 [2024-11-20 09:17:48.074117] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:22.693 [2024-11-20 09:17:48.074247] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58455 ] 00:05:22.953 [2024-11-20 09:17:48.251151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.953 [2024-11-20 09:17:48.387728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.953 [2024-11-20 09:17:48.387769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.893 09:17:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.893 09:17:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.893 09:17:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.893 Malloc0 00:05:23.893 09:17:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.152 Malloc1 00:05:24.411 09:17:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.411 09:17:49 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.411 09:17:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.672 /dev/nbd0 00:05:24.672 09:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.673 09:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.673 1+0 records in 00:05:24.673 1+0 
records out 00:05:24.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312483 s, 13.1 MB/s 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.673 09:17:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.673 09:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.673 09:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.673 09:17:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.932 /dev/nbd1 00:05:24.932 09:17:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.932 09:17:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.932 09:17:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.933 09:17:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.933 1+0 records in 00:05:24.933 1+0 records out 00:05:24.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281755 s, 14.5 MB/s 00:05:24.933 09:17:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.933 09:17:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.933 09:17:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.933 09:17:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.933 09:17:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.933 09:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.933 09:17:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.933 09:17:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.933 09:17:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.933 09:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.193 { 00:05:25.193 "nbd_device": "/dev/nbd0", 00:05:25.193 "bdev_name": "Malloc0" 00:05:25.193 }, 00:05:25.193 { 00:05:25.193 "nbd_device": "/dev/nbd1", 00:05:25.193 "bdev_name": "Malloc1" 00:05:25.193 } 00:05:25.193 ]' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.193 { 00:05:25.193 "nbd_device": "/dev/nbd0", 00:05:25.193 "bdev_name": "Malloc0" 00:05:25.193 }, 00:05:25.193 { 00:05:25.193 "nbd_device": "/dev/nbd1", 00:05:25.193 "bdev_name": "Malloc1" 00:05:25.193 } 00:05:25.193 ]' 
00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.193 /dev/nbd1' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.193 /dev/nbd1' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.193 256+0 records in 00:05:25.193 256+0 records out 00:05:25.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014732 s, 71.2 MB/s 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.193 256+0 records in 00:05:25.193 256+0 records out 00:05:25.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236349 s, 44.4 MB/s 00:05:25.193 09:17:50 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.193 256+0 records in 00:05:25.193 256+0 records out 00:05:25.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284878 s, 36.8 MB/s 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.193 09:17:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.453 09:17:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.711 09:17:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.968 09:17:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.968 09:17:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.560 09:17:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.936 [2024-11-20 09:17:52.988839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.936 [2024-11-20 09:17:53.101103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.936 [2024-11-20 09:17:53.101105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.936 
[2024-11-20 09:17:53.289010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.936 [2024-11-20 09:17:53.289106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.887 spdk_app_start Round 1 00:05:29.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.887 09:17:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.887 09:17:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.887 09:17:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:05:29.887 09:17:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:05:29.887 09:17:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.887 09:17:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.887 09:17:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:29.887 09:17:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.887 09:17:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.887 09:17:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.887 09:17:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.887 09:17:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.887 Malloc0 00:05:30.146 09:17:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.404 Malloc1 00:05:30.404 09:17:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.404 09:17:55 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.404 09:17:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.662 /dev/nbd0 00:05:30.662 09:17:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.662 09:17:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.662 1+0 records in 00:05:30.662 1+0 records out 00:05:30.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274473 s, 14.9 MB/s 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.662 
09:17:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.662 09:17:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.662 09:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.662 09:17:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.662 09:17:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.920 /dev/nbd1 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.921 1+0 records in 00:05:30.921 1+0 records out 00:05:30.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239699 s, 17.1 MB/s 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.921 09:17:56 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.921 09:17:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.921 09:17:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.180 { 00:05:31.180 "nbd_device": "/dev/nbd0", 00:05:31.180 "bdev_name": "Malloc0" 00:05:31.180 }, 00:05:31.180 { 00:05:31.180 "nbd_device": "/dev/nbd1", 00:05:31.180 "bdev_name": "Malloc1" 00:05:31.180 } 00:05:31.180 ]' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.180 { 00:05:31.180 "nbd_device": "/dev/nbd0", 00:05:31.180 "bdev_name": "Malloc0" 00:05:31.180 }, 00:05:31.180 { 00:05:31.180 "nbd_device": "/dev/nbd1", 00:05:31.180 "bdev_name": "Malloc1" 00:05:31.180 } 00:05:31.180 ]' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.180 /dev/nbd1' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.180 /dev/nbd1' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.180 
09:17:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.180 256+0 records in 00:05:31.180 256+0 records out 00:05:31.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133009 s, 78.8 MB/s 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.180 256+0 records in 00:05:31.180 256+0 records out 00:05:31.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214057 s, 49.0 MB/s 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.180 256+0 records in 00:05:31.180 256+0 records out 00:05:31.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267601 s, 39.2 MB/s 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.180 09:17:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.181 09:17:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.440 09:17:56 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.440 09:17:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.709 09:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.968 09:17:57 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.968 09:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.968 09:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.227 09:17:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.227 09:17:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.487 09:17:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.866 [2024-11-20 09:17:59.127059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.866 [2024-11-20 09:17:59.247735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.866 [2024-11-20 09:17:59.247755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.124 [2024-11-20 09:17:59.468441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.124 [2024-11-20 09:17:59.468541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.504 spdk_app_start Round 2 00:05:35.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.504 09:18:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.504 09:18:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.504 09:18:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:05:35.504 09:18:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:05:35.504 09:18:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.504 09:18:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.504 09:18:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.504 09:18:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.504 09:18:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.764 09:18:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.764 09:18:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.764 09:18:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.023 Malloc0 00:05:36.283 09:18:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.542 Malloc1 00:05:36.542 09:18:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.542 09:18:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.801 /dev/nbd0 00:05:36.801 09:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.801 09:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.801 1+0 records in 00:05:36.801 1+0 records out 00:05:36.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281203 s, 14.6 MB/s 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.801 09:18:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.802 09:18:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.802 09:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.802 09:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.802 09:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.061 /dev/nbd1 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:37.061 09:18:02 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.061 1+0 records in 00:05:37.061 1+0 records out 00:05:37.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423527 s, 9.7 MB/s 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.061 09:18:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.061 09:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.320 { 00:05:37.320 "nbd_device": "/dev/nbd0", 00:05:37.320 "bdev_name": "Malloc0" 00:05:37.320 }, 00:05:37.320 { 00:05:37.320 "nbd_device": "/dev/nbd1", 00:05:37.320 "bdev_name": "Malloc1" 00:05:37.320 } 00:05:37.320 ]' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.320 { 
00:05:37.320 "nbd_device": "/dev/nbd0", 00:05:37.320 "bdev_name": "Malloc0" 00:05:37.320 }, 00:05:37.320 { 00:05:37.320 "nbd_device": "/dev/nbd1", 00:05:37.320 "bdev_name": "Malloc1" 00:05:37.320 } 00:05:37.320 ]' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.320 /dev/nbd1' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.320 /dev/nbd1' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.320 256+0 records in 00:05:37.320 256+0 records out 00:05:37.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123972 s, 84.6 MB/s 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.320 09:18:02 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.320 256+0 records in 00:05:37.320 256+0 records out 00:05:37.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225169 s, 46.6 MB/s 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.320 256+0 records in 00:05:37.320 256+0 records out 00:05:37.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278967 s, 37.6 MB/s 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.320 09:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.321 09:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.580 09:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.839 09:18:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.098 09:18:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.098 09:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.357 09:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.357 09:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.357 09:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.358 09:18:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.358 09:18:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.925 09:18:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.343 
[2024-11-20 09:18:05.485857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.343 [2024-11-20 09:18:05.617189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.343 [2024-11-20 09:18:05.617196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.602 [2024-11-20 09:18:05.837946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.602 [2024-11-20 09:18:05.838037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.983 09:18:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58455 /var/tmp/spdk-nbd.sock 00:05:41.983 09:18:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:05:41.983 09:18:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.983 09:18:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.983 09:18:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:41.983 09:18:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.983 09:18:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.243 09:18:07 event.app_repeat -- event/event.sh@39 -- # killprocess 58455 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58455 ']' 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58455 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58455 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58455' 00:05:42.243 killing process with pid 58455 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58455 00:05:42.243 09:18:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58455 00:05:43.619 spdk_app_start is called in Round 0. 00:05:43.619 Shutdown signal received, stop current app iteration 00:05:43.619 Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 reinitialization... 00:05:43.619 spdk_app_start is called in Round 1. 00:05:43.619 Shutdown signal received, stop current app iteration 00:05:43.619 Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 reinitialization... 00:05:43.619 spdk_app_start is called in Round 2. 
00:05:43.619 Shutdown signal received, stop current app iteration 00:05:43.619 Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 reinitialization... 00:05:43.619 spdk_app_start is called in Round 3. 00:05:43.619 Shutdown signal received, stop current app iteration 00:05:43.619 ************************************ 00:05:43.619 END TEST app_repeat 00:05:43.619 ************************************ 00:05:43.619 09:18:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:43.619 09:18:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:43.619 00:05:43.619 real 0m20.703s 00:05:43.619 user 0m44.825s 00:05:43.619 sys 0m2.994s 00:05:43.619 09:18:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.619 09:18:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.619 09:18:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:43.619 09:18:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.619 09:18:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.619 09:18:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.619 09:18:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.619 ************************************ 00:05:43.619 START TEST cpu_locks 00:05:43.619 ************************************ 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.619 * Looking for test storage... 
00:05:43.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.619 09:18:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.619 --rc genhtml_branch_coverage=1 00:05:43.619 --rc genhtml_function_coverage=1 00:05:43.619 --rc genhtml_legend=1 00:05:43.619 --rc geninfo_all_blocks=1 00:05:43.619 --rc geninfo_unexecuted_blocks=1 00:05:43.619 00:05:43.619 ' 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.619 --rc genhtml_branch_coverage=1 00:05:43.619 --rc genhtml_function_coverage=1 00:05:43.619 --rc genhtml_legend=1 00:05:43.619 --rc geninfo_all_blocks=1 00:05:43.619 --rc geninfo_unexecuted_blocks=1 
00:05:43.619 00:05:43.619 ' 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.619 --rc genhtml_branch_coverage=1 00:05:43.619 --rc genhtml_function_coverage=1 00:05:43.619 --rc genhtml_legend=1 00:05:43.619 --rc geninfo_all_blocks=1 00:05:43.619 --rc geninfo_unexecuted_blocks=1 00:05:43.619 00:05:43.619 ' 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.619 --rc genhtml_branch_coverage=1 00:05:43.619 --rc genhtml_function_coverage=1 00:05:43.619 --rc genhtml_legend=1 00:05:43.619 --rc geninfo_all_blocks=1 00:05:43.619 --rc geninfo_unexecuted_blocks=1 00:05:43.619 00:05:43.619 ' 00:05:43.619 09:18:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.619 09:18:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.619 09:18:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.619 09:18:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.619 09:18:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.619 ************************************ 00:05:43.619 START TEST default_locks 00:05:43.619 ************************************ 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58913 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.619 
09:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58913 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58913 ']' 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.619 09:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.877 [2024-11-20 09:18:09.142047] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:43.877 [2024-11-20 09:18:09.142196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58913 ] 00:05:43.877 [2024-11-20 09:18:09.327559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.137 [2024-11-20 09:18:09.450055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.073 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.073 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:45.073 09:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58913 00:05:45.073 09:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58913 00:05:45.073 09:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58913 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58913 ']' 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58913 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58913 00:05:45.334 killing process with pid 58913 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58913' 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58913 00:05:45.334 09:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58913 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58913 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58913 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58913 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58913 ']' 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58913) - No such process 00:05:48.624 ERROR: process (pid: 58913) is no longer running 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.624 ************************************ 00:05:48.624 END TEST default_locks 00:05:48.624 ************************************ 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.624 00:05:48.624 real 0m4.620s 00:05:48.624 user 0m4.564s 00:05:48.624 sys 0m0.697s 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.624 09:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.624 09:18:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.624 09:18:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:48.624 09:18:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.624 09:18:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.624 ************************************ 00:05:48.624 START TEST default_locks_via_rpc 00:05:48.624 ************************************ 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58996 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58996 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58996 ']' 00:05:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.624 09:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.624 [2024-11-20 09:18:13.821244] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:05:48.624 [2024-11-20 09:18:13.821506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:05:48.624 [2024-11-20 09:18:14.007830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.884 [2024-11-20 09:18:14.147667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.823 09:18:15 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58996
00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58996
00:05:49.823 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58996
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58996 ']'
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58996
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58996
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58996' killing process with pid 58996
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58996
00:05:50.083 09:18:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58996
00:05:53.374
00:05:53.374 real 0m4.596s
00:05:53.374 user 0m4.547s
00:05:53.374 sys 0m0.683s
00:05:53.374 09:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:53.374
************************************
00:05:53.374 END TEST default_locks_via_rpc
00:05:53.374 ************************************
00:05:53.374 09:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:53.374 09:18:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:53.374 09:18:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:53.374 09:18:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:53.374 09:18:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:53.374 ************************************
00:05:53.374 START TEST non_locking_app_on_locked_coremask
00:05:53.374 ************************************
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59081
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59081 /var/tmp/spdk.sock
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59081 ']'
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:53.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:53.374 09:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:53.374 [2024-11-20 09:18:18.481451] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:05:53.374 [2024-11-20 09:18:18.481715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ]
00:05:53.374 [2024-11-20 09:18:18.663782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.374 [2024-11-20 09:18:18.801956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59097
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59097 /var/tmp/spdk2.sock
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59097 ']'
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:54.761 09:18:19
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:54.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:54.761 09:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:54.761 [2024-11-20 09:18:19.874656] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:05:54.761 [2024-11-20 09:18:19.874880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ]
00:05:54.761 [2024-11-20 09:18:20.056731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:54.761 [2024-11-20 09:18:20.056799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.020 [2024-11-20 09:18:20.320514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.560 09:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:57.560 09:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:57.560 09:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59081
00:05:57.560 09:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59081
00:05:57.560 09:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59081
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59081 ']'
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59081
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59081
00:05:57.819 killing process with pid 59081
09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:57.819 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask --
common/autotest_common.sh@972 -- # echo 'killing process with pid 59081'
00:05:57.820 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59081
00:05:57.820 09:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59081
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59097
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59097 ']'
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59097
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59097
killing process with pid 59097
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59097'
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59097
00:06:04.395 09:18:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59097
00:06:06.303
00:06:06.303 real 0m13.017s
00:06:06.303 user 0m13.337s
00:06:06.303 sys 0m1.350s
00:06:06.303 09:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- #
xtrace_disable
00:06:06.303 ************************************
00:06:06.303 END TEST non_locking_app_on_locked_coremask
00:06:06.303 ************************************
00:06:06.303 09:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.303 09:18:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:06.303 09:18:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:06.303 09:18:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:06.303 09:18:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:06.303 ************************************
00:06:06.303 START TEST locking_app_on_unlocked_coremask
00:06:06.303 ************************************
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59261
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59261 /var/tmp/spdk.sock
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59261 ']'
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:06.304 09:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.304 [2024-11-20 09:18:31.571681] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:06.304 [2024-11-20 09:18:31.571828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59261 ]
00:06:06.304 [2024-11-20 09:18:31.753647] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:06.304 [2024-11-20 09:18:31.753714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.564 [2024-11-20 09:18:31.901210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59288
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59288 /var/tmp/spdk2.sock
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:07.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59288 ']'
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:07.946 09:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:07.946 [2024-11-20 09:18:33.093030] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:07.946 [2024-11-20 09:18:33.093254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ]
00:06:07.946 [2024-11-20 09:18:33.272113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.206 [2024-11-20 09:18:33.569557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.760 09:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:10.760 09:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:10.761 09:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59288
00:06:10.761 09:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:10.761 09:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59288
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59261
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59261 ']'
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59261
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59261
killing process with pid 59261
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask --
common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59261'
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59261
00:06:10.761 09:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59261
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59288
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59288 ']'
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59288
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59288
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59288' killing process with pid 59288
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59288
00:06:17.350 09:18:41 event.cpu_locks.locking_app_on_unlocked_coremask --
common/autotest_common.sh@978 -- # wait 59288
00:06:19.260
00:06:19.260 real 0m12.980s
00:06:19.260 user 0m12.858s
00:06:19.260 sys 0m1.655s
00:06:19.260 ************************************
00:06:19.260 END TEST locking_app_on_unlocked_coremask
00:06:19.260 ************************************
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:19.260 09:18:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:19.260 09:18:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:19.260 09:18:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.260 09:18:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:19.260 ************************************
00:06:19.260 START TEST locking_app_on_locked_coremask
00:06:19.260 ************************************
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59443
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59443 /var/tmp/spdk.sock
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59443 ']'
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask --
common/autotest_common.sh@840 -- # local max_retries=100
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:19.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:19.260 09:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:19.260 [2024-11-20 09:18:44.605384] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:19.260 [2024-11-20 09:18:44.605628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ]
00:06:19.519 [2024-11-20 09:18:44.783799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.520 [2024-11-20 09:18:44.934875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59464
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59464 /var/tmp/spdk2.sock
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask --
common/autotest_common.sh@652 -- # local es=0
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59464 /var/tmp/spdk2.sock
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:20.899 09:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59464 /var/tmp/spdk2.sock
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59464 ']'
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:20.899 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:20.899 [2024-11-20 09:18:46.093707] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:20.899 [2024-11-20 09:18:46.093938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59464 ]
00:06:20.899 [2024-11-20 09:18:46.275803] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59443 has claimed it.
00:06:20.899 [2024-11-20 09:18:46.275890] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:21.469 ERROR: process (pid: 59464) is no longer running
00:06:21.469 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59464) - No such process
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59443
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59443
00:06:21.469 09:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:21.728 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59443
00:06:21.728 09:18:47
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59443 ']'
00:06:21.728 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59443
00:06:21.728 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:21.728 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:21.728 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59443
00:06:21.988 killing process with pid 59443
09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:21.988 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:21.988 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59443'
00:06:21.988 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59443
00:06:21.988 09:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59443
00:06:25.298
00:06:25.298 real 0m5.504s
00:06:25.298 user 0m5.485s
00:06:25.298 sys 0m1.052s
00:06:25.298 ************************************
00:06:25.298 END TEST locking_app_on_locked_coremask
00:06:25.298 ************************************
00:06:25.298 09:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.298 09:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:25.298 09:18:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:25.298 09:18:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.298 09:18:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.298 09:18:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:25.298 ************************************
00:06:25.298 START TEST locking_overlapped_coremask
00:06:25.298 ************************************
00:06:25.298 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:25.298 09:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59539
00:06:25.298 09:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:25.298 09:18:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59539 /var/tmp/spdk.sock
00:06:25.298 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59539 ']'
00:06:25.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:25.299 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:25.299 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.299 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:25.299 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.299 09:18:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:25.299 [2024-11-20 09:18:50.188590] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:25.299 [2024-11-20 09:18:50.188848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ]
00:06:25.299 [2024-11-20 09:18:50.373945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:25.299 [2024-11-20 09:18:50.529253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.299 [2024-11-20 09:18:50.529410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.299 [2024-11-20 09:18:50.529493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59563
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59563 /var/tmp/spdk2.sock
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59563 /var/tmp/spdk2.sock
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59563 /var/tmp/spdk2.sock 00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59563 ']' 00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.247 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.248 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.248 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.248 09:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.526 [2024-11-20 09:18:51.704732] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:26.526 [2024-11-20 09:18:51.704953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:06:26.526 [2024-11-20 09:18:51.878896] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59539 has claimed it. 00:06:26.526 [2024-11-20 09:18:51.878979] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:27.095 ERROR: process (pid: 59563) is no longer running 00:06:27.095 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59563) - No such process 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59539 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59539 ']' 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59539 00:06:27.095 09:18:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59539 00:06:27.095 killing process with pid 59539 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59539' 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59539 00:06:27.095 09:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59539 00:06:30.388 00:06:30.388 real 0m5.048s 00:06:30.388 user 0m13.501s 00:06:30.388 sys 0m0.814s 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.388 ************************************ 00:06:30.388 END TEST locking_overlapped_coremask 00:06:30.388 ************************************ 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.388 09:18:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.388 09:18:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.388 09:18:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.388 09:18:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.388 ************************************ 00:06:30.388 START TEST 
locking_overlapped_coremask_via_rpc 00:06:30.388 ************************************ 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59631 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59631 /var/tmp/spdk.sock 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59631 ']' 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.388 09:18:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.388 [2024-11-20 09:18:55.294936] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:06:30.388 [2024-11-20 09:18:55.295549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:06:30.388 [2024-11-20 09:18:55.453186] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.388 [2024-11-20 09:18:55.453365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.388 [2024-11-20 09:18:55.609612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.388 [2024-11-20 09:18:55.609751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.388 [2024-11-20 09:18:55.609799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59656 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59656 /var/tmp/spdk2.sock 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59656 ']' 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.354 09:18:56 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.354 09:18:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.613 [2024-11-20 09:18:56.841460] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:31.613 [2024-11-20 09:18:56.841687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59656 ] 00:06:31.613 [2024-11-20 09:18:57.021217] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.613 [2024-11-20 09:18:57.021306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.203 [2024-11-20 09:18:57.335937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.203 [2024-11-20 09:18:57.339719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.203 [2024-11-20 09:18:57.339721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.739 09:18:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.739 [2024-11-20 09:18:59.602693] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59631 has claimed it. 00:06:34.739 request: 00:06:34.739 { 00:06:34.739 "method": "framework_enable_cpumask_locks", 00:06:34.739 "req_id": 1 00:06:34.739 } 00:06:34.739 Got JSON-RPC error response 00:06:34.739 response: 00:06:34.739 { 00:06:34.739 "code": -32603, 00:06:34.739 "message": "Failed to claim CPU core: 2" 00:06:34.739 } 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59631 /var/tmp/spdk.sock 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59631 ']' 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59656 /var/tmp/spdk2.sock 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59656 ']' 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.739 09:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.739 00:06:34.739 real 0m4.900s 00:06:34.739 user 0m1.434s 00:06:34.739 sys 0m0.217s 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.739 09:19:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.739 ************************************ 00:06:34.739 END TEST locking_overlapped_coremask_via_rpc 00:06:34.739 ************************************ 00:06:34.739 09:19:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.739 09:19:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59631 ]] 00:06:34.739 09:19:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59631 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59631 ']' 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59631 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59631 00:06:34.739 killing process with pid 59631 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59631' 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59631 00:06:34.739 09:19:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59631 00:06:38.061 09:19:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59656 ]] 00:06:38.061 09:19:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59656 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59656 ']' 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59656 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59656 00:06:38.061 killing process with pid 59656 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59656' 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59656 00:06:38.061 09:19:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59656 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59631 ]] 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59631 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59631 ']' 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59631 00:06:41.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59631) - No such process 00:06:41.350 Process with pid 59631 is not found 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59631 is not found' 00:06:41.350 Process with pid 59656 is not found 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59656 ]] 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59656 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59656 ']' 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59656 00:06:41.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59656) - No such process 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59656 is not found' 00:06:41.350 09:19:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.350 00:06:41.350 real 0m57.606s 00:06:41.350 user 1m38.154s 00:06:41.350 sys 0m8.096s 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.350 ************************************ 00:06:41.350 END TEST cpu_locks 00:06:41.350 
************************************ 00:06:41.350 09:19:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.350 ************************************ 00:06:41.350 END TEST event 00:06:41.350 ************************************ 00:06:41.350 00:06:41.350 real 1m28.291s 00:06:41.350 user 2m38.254s 00:06:41.350 sys 0m12.264s 00:06:41.350 09:19:06 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.350 09:19:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.350 09:19:06 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:41.350 09:19:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.350 09:19:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.350 09:19:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.350 ************************************ 00:06:41.350 START TEST thread 00:06:41.350 ************************************ 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:41.350 * Looking for test storage... 
00:06:41.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.350 09:19:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.350 09:19:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.350 09:19:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.350 09:19:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.350 09:19:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.350 09:19:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.350 09:19:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.350 09:19:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.350 09:19:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.350 09:19:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.350 09:19:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.350 09:19:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:41.350 09:19:06 thread -- scripts/common.sh@345 -- # : 1 00:06:41.350 09:19:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.350 09:19:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.350 09:19:06 thread -- scripts/common.sh@365 -- # decimal 1 00:06:41.350 09:19:06 thread -- scripts/common.sh@353 -- # local d=1 00:06:41.350 09:19:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.350 09:19:06 thread -- scripts/common.sh@355 -- # echo 1 00:06:41.350 09:19:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.350 09:19:06 thread -- scripts/common.sh@366 -- # decimal 2 00:06:41.350 09:19:06 thread -- scripts/common.sh@353 -- # local d=2 00:06:41.350 09:19:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.350 09:19:06 thread -- scripts/common.sh@355 -- # echo 2 00:06:41.350 09:19:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.350 09:19:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.350 09:19:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.350 09:19:06 thread -- scripts/common.sh@368 -- # return 0 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.350 --rc genhtml_branch_coverage=1 00:06:41.350 --rc genhtml_function_coverage=1 00:06:41.350 --rc genhtml_legend=1 00:06:41.350 --rc geninfo_all_blocks=1 00:06:41.350 --rc geninfo_unexecuted_blocks=1 00:06:41.350 00:06:41.350 ' 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.350 --rc genhtml_branch_coverage=1 00:06:41.350 --rc genhtml_function_coverage=1 00:06:41.350 --rc genhtml_legend=1 00:06:41.350 --rc geninfo_all_blocks=1 00:06:41.350 --rc geninfo_unexecuted_blocks=1 00:06:41.350 00:06:41.350 ' 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.350 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.350 --rc genhtml_branch_coverage=1 00:06:41.350 --rc genhtml_function_coverage=1 00:06:41.350 --rc genhtml_legend=1 00:06:41.350 --rc geninfo_all_blocks=1 00:06:41.350 --rc geninfo_unexecuted_blocks=1 00:06:41.350 00:06:41.350 ' 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.350 --rc genhtml_branch_coverage=1 00:06:41.350 --rc genhtml_function_coverage=1 00:06:41.350 --rc genhtml_legend=1 00:06:41.350 --rc geninfo_all_blocks=1 00:06:41.350 --rc geninfo_unexecuted_blocks=1 00:06:41.350 00:06:41.350 ' 00:06:41.350 09:19:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.350 09:19:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.350 ************************************ 00:06:41.350 START TEST thread_poller_perf 00:06:41.350 ************************************ 00:06:41.350 09:19:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:41.610 [2024-11-20 09:19:06.812093] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:06:41.610 [2024-11-20 09:19:06.812320] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:06:41.610 [2024-11-20 09:19:06.992992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.881 [2024-11-20 09:19:07.155137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.881 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.270 [2024-11-20T09:19:08.726Z] ====================================== 00:06:43.270 [2024-11-20T09:19:08.726Z] busy:2301214818 (cyc) 00:06:43.270 [2024-11-20T09:19:08.726Z] total_run_count: 318000 00:06:43.270 [2024-11-20T09:19:08.726Z] tsc_hz: 2290000000 (cyc) 00:06:43.270 [2024-11-20T09:19:08.726Z] ====================================== 00:06:43.270 [2024-11-20T09:19:08.726Z] poller_cost: 7236 (cyc), 3159 (nsec) 00:06:43.270 00:06:43.270 real 0m1.688s 00:06:43.270 user 0m1.470s 00:06:43.270 sys 0m0.108s 00:06:43.270 09:19:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.271 09:19:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.271 ************************************ 00:06:43.271 END TEST thread_poller_perf 00:06:43.271 ************************************ 00:06:43.271 09:19:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.271 09:19:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.271 09:19:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.271 09:19:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.271 ************************************ 00:06:43.271 START TEST thread_poller_perf 00:06:43.271 
************************************ 00:06:43.271 09:19:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.271 [2024-11-20 09:19:08.570825] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:06:43.271 [2024-11-20 09:19:08.570950] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59904 ] 00:06:43.530 [2024-11-20 09:19:08.748377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.530 [2024-11-20 09:19:08.924992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.530 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.921 [2024-11-20T09:19:10.377Z] ====================================== 00:06:44.921 [2024-11-20T09:19:10.377Z] busy:2295737170 (cyc) 00:06:44.921 [2024-11-20T09:19:10.378Z] total_run_count: 4384000 00:06:44.922 [2024-11-20T09:19:10.378Z] tsc_hz: 2290000000 (cyc) 00:06:44.922 [2024-11-20T09:19:10.378Z] ====================================== 00:06:44.922 [2024-11-20T09:19:10.378Z] poller_cost: 523 (cyc), 228 (nsec) 00:06:44.922 ************************************ 00:06:44.922 END TEST thread_poller_perf 00:06:44.922 ************************************ 00:06:44.922 00:06:44.922 real 0m1.658s 00:06:44.922 user 0m1.442s 00:06:44.922 sys 0m0.108s 00:06:44.922 09:19:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.922 09:19:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.922 09:19:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.922 ************************************ 00:06:44.922 END TEST thread 00:06:44.922 ************************************ 00:06:44.922 
00:06:44.922 real 0m3.739s 00:06:44.922 user 0m3.105s 00:06:44.922 sys 0m0.430s 00:06:44.922 09:19:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.922 09:19:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.922 09:19:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:44.922 09:19:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:44.922 09:19:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.922 09:19:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.922 09:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:44.922 ************************************ 00:06:44.922 START TEST app_cmdline 00:06:44.922 ************************************ 00:06:44.922 09:19:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:45.184 * Looking for test storage... 00:06:45.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.184 09:19:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.184 --rc genhtml_branch_coverage=1 00:06:45.184 --rc genhtml_function_coverage=1 00:06:45.184 --rc 
genhtml_legend=1 00:06:45.184 --rc geninfo_all_blocks=1 00:06:45.184 --rc geninfo_unexecuted_blocks=1 00:06:45.184 00:06:45.184 ' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.184 --rc genhtml_branch_coverage=1 00:06:45.184 --rc genhtml_function_coverage=1 00:06:45.184 --rc genhtml_legend=1 00:06:45.184 --rc geninfo_all_blocks=1 00:06:45.184 --rc geninfo_unexecuted_blocks=1 00:06:45.184 00:06:45.184 ' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.184 --rc genhtml_branch_coverage=1 00:06:45.184 --rc genhtml_function_coverage=1 00:06:45.184 --rc genhtml_legend=1 00:06:45.184 --rc geninfo_all_blocks=1 00:06:45.184 --rc geninfo_unexecuted_blocks=1 00:06:45.184 00:06:45.184 ' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.184 --rc genhtml_branch_coverage=1 00:06:45.184 --rc genhtml_function_coverage=1 00:06:45.184 --rc genhtml_legend=1 00:06:45.184 --rc geninfo_all_blocks=1 00:06:45.184 --rc geninfo_unexecuted_blocks=1 00:06:45.184 00:06:45.184 ' 00:06:45.184 09:19:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.184 09:19:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59993 00:06:45.184 09:19:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.184 09:19:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59993 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59993 ']' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.184 09:19:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.184 [2024-11-20 09:19:10.636065] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:06:45.184 [2024-11-20 09:19:10.636299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59993 ] 00:06:45.443 [2024-11-20 09:19:10.814432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.702 [2024-11-20 09:19:10.943828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.640 09:19:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.640 09:19:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:46.640 09:19:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:46.899 { 00:06:46.899 "version": "SPDK v25.01-pre git sha1 2741dd1ac", 00:06:46.899 "fields": { 00:06:46.899 "major": 25, 00:06:46.899 "minor": 1, 00:06:46.899 "patch": 0, 00:06:46.899 "suffix": "-pre", 00:06:46.899 "commit": "2741dd1ac" 00:06:46.899 } 00:06:46.899 } 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.899 09:19:12 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.899 09:19:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:46.899 09:19:12 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.159 request: 00:06:47.159 { 00:06:47.159 "method": "env_dpdk_get_mem_stats", 00:06:47.159 "req_id": 1 00:06:47.159 } 00:06:47.159 Got JSON-RPC error response 00:06:47.159 response: 00:06:47.159 { 00:06:47.159 "code": -32601, 00:06:47.159 "message": "Method not found" 00:06:47.159 } 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.159 09:19:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59993 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59993 ']' 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59993 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59993 00:06:47.159 killing process with pid 59993 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59993' 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 59993 00:06:47.159 09:19:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 59993 00:06:50.465 00:06:50.465 real 0m4.884s 00:06:50.465 user 0m5.219s 00:06:50.465 sys 0m0.645s 00:06:50.465 09:19:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.465 09:19:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.465 ************************************ 00:06:50.465 END TEST app_cmdline 00:06:50.465 ************************************ 00:06:50.465 09:19:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:50.465 09:19:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.465 09:19:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.465 09:19:15 -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.465 ************************************ 00:06:50.465 START TEST version 00:06:50.465 ************************************ 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:50.465 * Looking for test storage... 00:06:50.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.465 09:19:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.465 09:19:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.465 09:19:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.465 09:19:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.465 09:19:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.465 09:19:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.465 09:19:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.465 09:19:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.465 09:19:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.465 09:19:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.465 09:19:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.465 09:19:15 version -- scripts/common.sh@344 -- # case "$op" in 00:06:50.465 09:19:15 version -- scripts/common.sh@345 -- # : 1 00:06:50.465 09:19:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.465 09:19:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.465 09:19:15 version -- scripts/common.sh@365 -- # decimal 1 00:06:50.465 09:19:15 version -- scripts/common.sh@353 -- # local d=1 00:06:50.465 09:19:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.465 09:19:15 version -- scripts/common.sh@355 -- # echo 1 00:06:50.465 09:19:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.465 09:19:15 version -- scripts/common.sh@366 -- # decimal 2 00:06:50.465 09:19:15 version -- scripts/common.sh@353 -- # local d=2 00:06:50.465 09:19:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.465 09:19:15 version -- scripts/common.sh@355 -- # echo 2 00:06:50.465 09:19:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.465 09:19:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.465 09:19:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.465 09:19:15 version -- scripts/common.sh@368 -- # return 0 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.465 --rc genhtml_branch_coverage=1 00:06:50.465 --rc genhtml_function_coverage=1 00:06:50.465 --rc genhtml_legend=1 00:06:50.465 --rc geninfo_all_blocks=1 00:06:50.465 --rc geninfo_unexecuted_blocks=1 00:06:50.465 00:06:50.465 ' 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.465 --rc genhtml_branch_coverage=1 00:06:50.465 --rc genhtml_function_coverage=1 00:06:50.465 --rc genhtml_legend=1 00:06:50.465 --rc geninfo_all_blocks=1 00:06:50.465 --rc geninfo_unexecuted_blocks=1 00:06:50.465 00:06:50.465 ' 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.465 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.465 --rc genhtml_branch_coverage=1 00:06:50.465 --rc genhtml_function_coverage=1 00:06:50.465 --rc genhtml_legend=1 00:06:50.465 --rc geninfo_all_blocks=1 00:06:50.465 --rc geninfo_unexecuted_blocks=1 00:06:50.465 00:06:50.465 ' 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.465 --rc genhtml_branch_coverage=1 00:06:50.465 --rc genhtml_function_coverage=1 00:06:50.465 --rc genhtml_legend=1 00:06:50.465 --rc geninfo_all_blocks=1 00:06:50.465 --rc geninfo_unexecuted_blocks=1 00:06:50.465 00:06:50.465 ' 00:06:50.465 09:19:15 version -- app/version.sh@17 -- # get_header_version major 00:06:50.465 09:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # cut -f2 00:06:50.465 09:19:15 version -- app/version.sh@17 -- # major=25 00:06:50.465 09:19:15 version -- app/version.sh@18 -- # get_header_version minor 00:06:50.465 09:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # cut -f2 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.465 09:19:15 version -- app/version.sh@18 -- # minor=1 00:06:50.465 09:19:15 version -- app/version.sh@19 -- # get_header_version patch 00:06:50.465 09:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # cut -f2 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.465 09:19:15 version -- app/version.sh@19 -- # patch=0 00:06:50.465 
09:19:15 version -- app/version.sh@20 -- # get_header_version suffix 00:06:50.465 09:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # cut -f2 00:06:50.465 09:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.465 09:19:15 version -- app/version.sh@20 -- # suffix=-pre 00:06:50.465 09:19:15 version -- app/version.sh@22 -- # version=25.1 00:06:50.465 09:19:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:50.465 09:19:15 version -- app/version.sh@28 -- # version=25.1rc0 00:06:50.465 09:19:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:50.465 09:19:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:50.465 09:19:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:50.465 09:19:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:50.465 00:06:50.465 real 0m0.335s 00:06:50.465 user 0m0.199s 00:06:50.465 sys 0m0.194s 00:06:50.465 09:19:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.465 09:19:15 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.465 ************************************ 00:06:50.465 END TEST version 00:06:50.465 ************************************ 00:06:50.465 09:19:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:50.465 09:19:15 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:50.465 09:19:15 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:50.465 09:19:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.465 09:19:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.465 09:19:15 -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.465 ************************************ 00:06:50.465 START TEST bdev_raid 00:06:50.465 ************************************ 00:06:50.465 09:19:15 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:50.465 * Looking for test storage... 00:06:50.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:50.465 09:19:15 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.465 09:19:15 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.465 09:19:15 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.465 09:19:15 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.466 09:19:15 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.466 --rc genhtml_branch_coverage=1 00:06:50.466 --rc genhtml_function_coverage=1 00:06:50.466 --rc genhtml_legend=1 00:06:50.466 --rc geninfo_all_blocks=1 00:06:50.466 --rc geninfo_unexecuted_blocks=1 00:06:50.466 00:06:50.466 ' 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.466 --rc genhtml_branch_coverage=1 00:06:50.466 --rc genhtml_function_coverage=1 00:06:50.466 --rc genhtml_legend=1 00:06:50.466 --rc geninfo_all_blocks=1 00:06:50.466 --rc geninfo_unexecuted_blocks=1 00:06:50.466 00:06:50.466 ' 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.466 --rc genhtml_branch_coverage=1 00:06:50.466 --rc genhtml_function_coverage=1 00:06:50.466 --rc genhtml_legend=1 00:06:50.466 --rc geninfo_all_blocks=1 00:06:50.466 --rc geninfo_unexecuted_blocks=1 00:06:50.466 00:06:50.466 ' 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.466 --rc genhtml_branch_coverage=1 00:06:50.466 --rc genhtml_function_coverage=1 00:06:50.466 --rc genhtml_legend=1 00:06:50.466 --rc geninfo_all_blocks=1 00:06:50.466 --rc geninfo_unexecuted_blocks=1 00:06:50.466 00:06:50.466 ' 00:06:50.466 09:19:15 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:50.466 09:19:15 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.466 09:19:15 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:50.466 09:19:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:50.466 09:19:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:50.466 09:19:15 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:50.466 09:19:15 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.466 09:19:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.466 ************************************ 00:06:50.466 START TEST raid1_resize_data_offset_test 00:06:50.466 ************************************ 00:06:50.466 Process raid pid: 60186 00:06:50.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60186 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60186' 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60186 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60186 ']' 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.466 09:19:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.725 [2024-11-20 09:19:16.003830] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:06:50.725 [2024-11-20 09:19:16.004078] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.725 [2024-11-20 09:19:16.175613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.983 [2024-11-20 09:19:16.310601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.242 [2024-11-20 09:19:16.543214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.242 [2024-11-20 09:19:16.543335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.500 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.500 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:51.500 09:19:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:51.500 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.500 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.760 malloc0 00:06:51.760 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.760 09:19:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:51.760 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.760 09:19:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.760 malloc1 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.760 09:19:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.760 null0 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.760 [2024-11-20 09:19:17.089239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:51.760 [2024-11-20 09:19:17.091512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:51.760 [2024-11-20 09:19:17.091634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:51.760 [2024-11-20 09:19:17.091837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:51.760 [2024-11-20 09:19:17.091888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:51.760 [2024-11-20 09:19:17.092287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:51.760 [2024-11-20 09:19:17.092577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:51.760 [2024-11-20 09:19:17.092637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:51.760 [2024-11-20 09:19:17.092897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.760 [2024-11-20 09:19:17.149163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.760 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.332 malloc2
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.333 [2024-11-20 09:19:17.752745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
[2024-11-20 09:19:17.771303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-11-20 09:19:17.773474] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:52.333 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60186
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60186 ']'
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60186
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60186
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:52.598 killing process with pid 60186
09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60186'
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60186
00:06:52.598 09:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60186
00:06:52.598 [2024-11-20 09:19:17.860798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:52.598 [2024-11-20 09:19:17.861137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:52.598 [2024-11-20 09:19:17.861283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:52.598 [2024-11-20 09:19:17.861305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:52.598 [2024-11-20 09:19:17.903707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:52.598 [2024-11-20 09:19:17.904125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:52.598 [2024-11-20 09:19:17.904175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:54.504 [2024-11-20 09:19:19.904313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:55.884 09:19:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:55.884
00:06:55.884 real 0m5.215s
00:06:55.884 user 0m5.167s
00:06:55.884 sys 0m0.542s
************************************
00:06:55.884 END TEST raid1_resize_data_offset_test
00:06:55.884 ************************************
00:06:55.884 09:19:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.884 09:19:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:55.884 09:19:21 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:55.884 09:19:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:55.884 09:19:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.884 09:19:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:55.884 ************************************
00:06:55.884 START TEST raid0_resize_superblock_test
************************************
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60281
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60281'
Process raid pid: 60281
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60281
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60281 ']'
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.884 09:19:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:55.884 [2024-11-20 09:19:21.283576] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:06:55.884 [2024-11-20 09:19:21.284259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:56.143 [2024-11-20 09:19:21.464798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.143 [2024-11-20 09:19:21.590937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.402 [2024-11-20 09:19:21.814377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:56.402 [2024-11-20 09:19:21.814538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:56.972 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.972 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:56.972 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:56.972 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.972 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 malloc0
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 [2024-11-20 09:19:22.797170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 09:19:22.797373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:57.541 [2024-11-20 09:19:22.797446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:57.541 [2024-11-20 09:19:22.797492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:57.541 [2024-11-20 09:19:22.799813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:57.541 [2024-11-20 09:19:22.799927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 0a169bfd-4022-4a74-8c7b-c33dcbc31ad0
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 ba77820b-c8a1-4377-87eb-b8e31e61fd77
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 4ae9ed49-917c-405d-a415-3962e2905533
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 [2024-11-20 09:19:22.933135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ba77820b-c8a1-4377-87eb-b8e31e61fd77 is claimed
[2024-11-20 09:19:22.933401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4ae9ed49-917c-405d-a415-3962e2905533 is claimed
[2024-11-20 09:19:22.933648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 09:19:22.933706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-11-20 09:19:22.934055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 09:19:22.934321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 09:19:22.934372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 09:19:22.934642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.541 09:19:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:57.801 09:19:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-11-20 09:19:23.025209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 [2024-11-20 09:19:23.053161] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-20 09:19:23.053199] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ba77820b-c8a1-4377-87eb-b8e31e61fd77' was resized: old size 131072, new size 204800
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 [2024-11-20 09:19:23.061035] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-20 09:19:23.061069] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4ae9ed49-917c-405d-a415-3962e2905533' was resized: old size 131072, new size 204800
[2024-11-20 09:19:23.061102] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
[2024-11-20 09:19:23.153000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.801 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.801 [2024-11-20 09:19:23.192693] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-20 09:19:23.192793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-20 09:19:23.192808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 09:19:23.192827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-20 09:19:23.192966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 09:19:23.193006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 09:19:23.193018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.802 [2024-11-20 09:19:23.200568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 09:19:23.200738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 09:19:23.200769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-20 09:19:23.200783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 09:19:23.203389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 09:19:23.203451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-11-20 09:19:23.205352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ba77820b-c8a1-4377-87eb-b8e31e61fd77
[2024-11-20 09:19:23.205473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ba77820b-c8a1-4377-87eb-b8e31e61fd77 is claimed
pt0
[2024-11-20 09:19:23.205630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4ae9ed49-917c-405d-a415-3962e2905533
[2024-11-20 09:19:23.205659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4ae9ed49-917c-405d-a415-3962e2905533 is claimed
[2024-11-20 09:19:23.205857] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4ae9ed49-917c-405d-a415-3962e2905533 (2) smaller than existing raid bdev Raid (3)
[2024-11-20 09:19:23.205886] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ba77820b-c8a1-4377-87eb-b8e31e61fd77: File exists
[2024-11-20 09:19:23.205924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-20 09:19:23.205937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-20 09:19:23.206224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-20 09:19:23.206404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-20 09:19:23.206415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-20 09:19:23.206624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
[2024-11-20 09:19:23.229720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:57.802 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.061 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:58.061 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60281
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60281 ']'
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60281
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60281
killing process with pid 60281
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60281'
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60281
00:06:58.062 09:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60281
00:06:58.062 [2024-11-20 09:19:23.307311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:58.062 [2024-11-20 09:19:23.307413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:58.062 [2024-11-20 09:19:23.307477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:58.062 [2024-11-20 09:19:23.307487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:59.966 [2024-11-20 09:19:24.925177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:00.904 09:19:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:00.904
00:07:00.904 real 0m4.960s
00:07:00.904 user 0m5.145s
00:07:00.904 sys 0m0.591s
00:07:00.904 09:19:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.904 09:19:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.904 ************************************
00:07:00.904 END TEST raid0_resize_superblock_test
************************************
00:07:00.904 09:19:26 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:07:00.904 09:19:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:00.904 09:19:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.904 09:19:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:00.904 ************************************
00:07:00.904 START TEST raid1_resize_superblock_test
************************************
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60379
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60379'
Process raid pid: 60379
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60379
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60379 ']'
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:00.904 09:19:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.904 [2024-11-20 09:19:26.306343] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:07:00.904 [2024-11-20 09:19:26.306618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:01.164 [2024-11-20 09:19:26.484853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.164 [2024-11-20 09:19:26.614089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.424 [2024-11-20 09:19:26.842994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:01.424 [2024-11-20 09:19:26.843043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:01.992 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:01.992 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:01.992 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:01.992 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.992 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.560 malloc0
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.560 [2024-11-20 09:19:27.813397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 09:19:27.813490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:02.560 [2024-11-20 09:19:27.813518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:02.560 [2024-11-20 09:19:27.813534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:02.560 [2024-11-20 09:19:27.816008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:02.560 [2024-11-20 09:19:27.816055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.560 4dbbef85-8087-4747-82ad-3af0bce9352f
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.560 5fb52cc5-c918-4340-a2bc-0776d7c743a4
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.560 32d156e1-17f2-4ef0-8ea4-5c4bbe0dd3e8
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.560 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.560 [2024-11-20 09:19:27.945171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5fb52cc5-c918-4340-a2bc-0776d7c743a4 is claimed
[2024-11-20 09:19:27.945308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 32d156e1-17f2-4ef0-8ea4-5c4bbe0dd3e8 is claimed
[2024-11-20 09:19:27.945492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 09:19:27.945518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
[2024-11-20 09:19:27.945844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-11-20 09:19:27.946094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 09:19:27.946116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-20 09:19:27.946325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:02.561 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.561 09:19:27 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:02.561 09:19:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:02.561 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.561 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.561 09:19:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.561 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:02.561 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:02.561 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:02.561 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.561 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.819 [2024-11-20 
09:19:28.065241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.819 [2024-11-20 09:19:28.109184] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:02.819 [2024-11-20 09:19:28.109233] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5fb52cc5-c918-4340-a2bc-0776d7c743a4' was resized: old size 131072, new size 204800 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.819 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.819 [2024-11-20 09:19:28.121073] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:02.819 [2024-11-20 09:19:28.121119] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '32d156e1-17f2-4ef0-8ea4-5c4bbe0dd3e8' was resized: old size 131072, new size 204800 00:07:02.819 
[2024-11-20 09:19:28.121157] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.820 09:19:28 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:02.820 [2024-11-20 09:19:28.232970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.820 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.079 [2024-11-20 09:19:28.280683] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:03.079 [2024-11-20 09:19:28.280783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:03.079 [2024-11-20 09:19:28.280820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:03.079 [2024-11-20 09:19:28.281003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.079 [2024-11-20 09:19:28.281250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.079 [2024-11-20 09:19:28.281335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.079 
[2024-11-20 09:19:28.281364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.079 [2024-11-20 09:19:28.288560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:03.079 [2024-11-20 09:19:28.288640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.079 [2024-11-20 09:19:28.288672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:03.079 [2024-11-20 09:19:28.288693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.079 [2024-11-20 09:19:28.291240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.079 [2024-11-20 09:19:28.291291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:03.079 [2024-11-20 09:19:28.293369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5fb52cc5-c918-4340-a2bc-0776d7c743a4 00:07:03.079 [2024-11-20 09:19:28.293479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5fb52cc5-c918-4340-a2bc-0776d7c743a4 is claimed 00:07:03.079 [2024-11-20 09:19:28.293631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 32d156e1-17f2-4ef0-8ea4-5c4bbe0dd3e8 00:07:03.079 [2024-11-20 09:19:28.293663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 32d156e1-17f2-4ef0-8ea4-5c4bbe0dd3e8 is claimed 00:07:03.079 [2024-11-20 09:19:28.293854] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 32d156e1-17f2-4ef0-8ea4-5c4bbe0dd3e8 (2) smaller than existing raid bdev Raid (3) 00:07:03.079 [2024-11-20 09:19:28.293885] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5fb52cc5-c918-4340-a2bc-0776d7c743a4: File exists 00:07:03.079 [2024-11-20 09:19:28.293922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:03.079 [2024-11-20 09:19:28.293940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:03.079 pt0 00:07:03.079 [2024-11-20 09:19:28.294222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:03.079 [2024-11-20 09:19:28.294414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:03.079 [2024-11-20 09:19:28.294425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:03.079 [2024-11-20 09:19:28.294632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.079 [2024-11-20 09:19:28.313739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60379 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60379 ']' 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60379 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60379 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 60379' 00:07:03.079 killing process with pid 60379 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60379 00:07:03.079 [2024-11-20 09:19:28.394074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.079 09:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60379 00:07:03.079 [2024-11-20 09:19:28.394187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.079 [2024-11-20 09:19:28.394252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.079 [2024-11-20 09:19:28.394269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:04.984 [2024-11-20 09:19:30.019754] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.921 09:19:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:05.921 00:07:05.921 real 0m5.060s 00:07:05.921 user 0m5.291s 00:07:05.921 sys 0m0.615s 00:07:05.921 09:19:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.921 09:19:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.921 ************************************ 00:07:05.921 END TEST raid1_resize_superblock_test 00:07:05.922 ************************************ 00:07:05.922 09:19:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:05.922 09:19:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:05.922 09:19:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:05.922 09:19:31 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:05.922 09:19:31 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:05.922 09:19:31 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:05.922 
09:19:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.922 09:19:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.922 09:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.922 ************************************ 00:07:05.922 START TEST raid_function_test_raid0 00:07:05.922 ************************************ 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60482 00:07:05.922 Process raid pid: 60482 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60482' 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60482 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60482 ']' 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.922 09:19:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:06.182 [2024-11-20 09:19:31.451377] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:06.182 [2024-11-20 09:19:31.451523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.182 [2024-11-20 09:19:31.629460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.441 [2024-11-20 09:19:31.755032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.702 [2024-11-20 09:19:31.972374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.702 [2024-11-20 09:19:31.972440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:06.962 Base_1 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:06.962 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.962 
09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.271 Base_2 00:07:07.271 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.271 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:07.271 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.271 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 [2024-11-20 09:19:32.427844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.272 [2024-11-20 09:19:32.429975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.272 [2024-11-20 09:19:32.430062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.272 [2024-11-20 09:19:32.430077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.272 [2024-11-20 09:19:32.430400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.272 [2024-11-20 09:19:32.430598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.272 [2024-11-20 09:19:32.430616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:07.272 [2024-11-20 09:19:32.430811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.272 09:19:32 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.272 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:07.531 [2024-11-20 09:19:32.719452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:07.531 /dev/nbd0 00:07:07.531 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.531 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:07.531 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.531 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.532 1+0 records in 00:07:07.532 1+0 records out 00:07:07.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488916 s, 8.4 MB/s 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.532 09:19:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:07.791 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.791 { 00:07:07.791 "nbd_device": "/dev/nbd0", 00:07:07.791 "bdev_name": "raid" 00:07:07.791 } 00:07:07.791 ]' 00:07:07.791 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.791 { 00:07:07.791 "nbd_device": "/dev/nbd0", 00:07:07.791 "bdev_name": "raid" 00:07:07.791 } 00:07:07.791 ]' 00:07:07.791 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.791 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:07.791 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:07.792 4096+0 records in 00:07:07.792 4096+0 records out 00:07:07.792 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0261437 s, 80.2 MB/s 00:07:07.792 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:08.052 4096+0 records in 00:07:08.052 4096+0 records out 00:07:08.052 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.242076 s, 8.7 MB/s 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:08.052 128+0 records in 00:07:08.052 128+0 records out 00:07:08.052 65536 bytes (66 kB, 64 KiB) copied, 0.000953625 s, 68.7 MB/s 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:08.052 2035+0 records in 00:07:08.052 2035+0 records out 00:07:08.052 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0151267 s, 68.9 MB/s 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:08.052 456+0 records in 00:07:08.052 456+0 records out 00:07:08.052 233472 bytes (233 kB, 228 KiB) copied, 0.00386139 s, 60.5 MB/s 00:07:08.052 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.312 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.572 [2024-11-20 09:19:33.770037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.572 09:19:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:08.572 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.572 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.572 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60482 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60482 ']' 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60482 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.831 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60482 00:07:08.832 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.832 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:08.832 killing process with pid 60482 00:07:08.832 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60482' 00:07:08.832 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60482 00:07:08.832 [2024-11-20 09:19:34.115329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.832 [2024-11-20 09:19:34.115466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.832 [2024-11-20 09:19:34.115522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.832 09:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60482 00:07:08.832 [2024-11-20 09:19:34.115544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:09.091 [2024-11-20 09:19:34.350490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.472 09:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:10.472 00:07:10.472 real 0m4.261s 00:07:10.472 user 0m5.011s 00:07:10.472 sys 0m1.031s 00:07:10.472 09:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.472 09:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 ************************************ 00:07:10.472 END TEST raid_function_test_raid0 00:07:10.472 ************************************ 00:07:10.472 09:19:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:10.472 09:19:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.472 09:19:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.472 09:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 
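The unmap/verify loop traced above (bdev_raid.sh@36-48) repeats for both RAID levels: it zeroes a block range in the reference file with `dd`, discards the same byte range on the nbd device with `blkdiscard`, then `cmp`s the two. A minimal sketch of how the byte offsets and lengths logged for `blkdiscard` are derived from the block-granular test parameters — the variable names are taken from the trace, and a 512-byte logical block size is assumed, as reported by `lsblk -o LOG-SEC`:

```shell
#!/bin/sh
# Derive blkdiscard byte offsets/lengths from block-granular parameters,
# mirroring the unmap_off/unmap_len computation seen in the xtrace output.
blksize=512                      # logical sector size from lsblk -o LOG-SEC
unmap_blk_offs="0 1028 321"      # starting blocks of each discard region
unmap_blk_nums="128 2035 456"    # number of blocks in each discard region

set -- $unmap_blk_nums
for off in $unmap_blk_offs; do
    num=$1; shift
    unmap_off=$((off * blksize))   # byte offset passed to blkdiscard -o
    unmap_len=$((num * blksize))   # byte length passed to blkdiscard -l
    echo "blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0"
done
```

Running the sketch prints the same three commands recorded in the log (`-o 0 -l 65536`, `-o 526336 -l 1041920`, `-o 164352 -l 233472`), confirming the offsets are simply block index times logical sector size.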
************************************ 00:07:10.472 START TEST raid_function_test_concat 00:07:10.472 ************************************ 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60617 00:07:10.472 Process raid pid: 60617 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60617' 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60617 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60617 ']' 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.472 09:19:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:10.472 [2024-11-20 09:19:35.789197] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:10.472 [2024-11-20 09:19:35.789336] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.731 [2024-11-20 09:19:35.971343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.731 [2024-11-20 09:19:36.106208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.990 [2024-11-20 09:19:36.335179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.990 [2024-11-20 09:19:36.335237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.250 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.250 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:11.250 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:11.250 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.250 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.508 Base_1 00:07:11.508 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.508 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.509 Base_2 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.509 [2024-11-20 09:19:36.786496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:11.509 [2024-11-20 09:19:36.788636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:11.509 [2024-11-20 09:19:36.788740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:11.509 [2024-11-20 09:19:36.788754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:11.509 [2024-11-20 09:19:36.789093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.509 [2024-11-20 09:19:36.789285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:11.509 [2024-11-20 09:19:36.789303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:11.509 [2024-11-20 09:19:36.789534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.509 09:19:36 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:11.509 09:19:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:11.773 [2024-11-20 09:19:37.074068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:11.773 /dev/nbd0 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.773 1+0 records in 00:07:11.773 1+0 records out 00:07:11.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519343 s, 7.9 MB/s 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.773 
09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:11.773 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.054 { 00:07:12.054 "nbd_device": "/dev/nbd0", 00:07:12.054 "bdev_name": "raid" 00:07:12.054 } 00:07:12.054 ]' 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.054 { 00:07:12.054 "nbd_device": "/dev/nbd0", 00:07:12.054 "bdev_name": "raid" 00:07:12.054 } 00:07:12.054 ]' 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:12.054 
09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:12.054 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:12.313 4096+0 records in 00:07:12.313 4096+0 records out 00:07:12.313 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0315705 s, 66.4 MB/s 00:07:12.313 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:12.573 4096+0 records in 00:07:12.573 4096+0 
records out 00:07:12.573 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.24569 s, 8.5 MB/s 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:12.573 128+0 records in 00:07:12.573 128+0 records out 00:07:12.573 65536 bytes (66 kB, 64 KiB) copied, 0.000359904 s, 182 MB/s 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:12.573 2035+0 records in 00:07:12.573 2035+0 records out 00:07:12.573 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00943513 s, 110 MB/s 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:12.573 456+0 records in 00:07:12.573 456+0 records out 00:07:12.573 233472 bytes (233 kB, 228 KiB) copied, 0.00367828 s, 63.5 MB/s 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.573 09:19:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.833 [2024-11-20 09:19:38.179986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:12.833 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.833 09:19:38 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.092 09:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60617 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60617 ']' 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60617 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.093 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60617 00:07:13.351 09:19:38 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.351 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.351 killing process with pid 60617 00:07:13.351 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60617' 00:07:13.351 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60617 00:07:13.351 [2024-11-20 09:19:38.572313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.351 [2024-11-20 09:19:38.572467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.351 [2024-11-20 09:19:38.572532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.351 09:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60617 00:07:13.351 [2024-11-20 09:19:38.572545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:13.610 [2024-11-20 09:19:38.817246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.989 09:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:14.989 00:07:14.989 real 0m4.412s 00:07:14.989 user 0m5.225s 00:07:14.989 sys 0m1.071s 00:07:14.989 09:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.989 09:19:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 ************************************ 00:07:14.989 END TEST raid_function_test_concat 00:07:14.989 ************************************ 00:07:14.989 09:19:40 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:14.989 09:19:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.989 09:19:40 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.989 09:19:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 ************************************ 00:07:14.989 START TEST raid0_resize_test 00:07:14.989 ************************************ 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60751 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60751' 00:07:14.989 Process raid pid: 60751 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60751 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60751 ']' 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:14.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.989 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.990 09:19:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.990 [2024-11-20 09:19:40.272326] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:14.990 [2024-11-20 09:19:40.272483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.249 [2024-11-20 09:19:40.452931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.249 [2024-11-20 09:19:40.577259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.508 [2024-11-20 09:19:40.807003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.508 [2024-11-20 09:19:40.807069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.768 Base_1 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.768 
09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.768 Base_2 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.768 [2024-11-20 09:19:41.189135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:15.768 [2024-11-20 09:19:41.191268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:15.768 [2024-11-20 09:19:41.191364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:15.768 [2024-11-20 09:19:41.191380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:15.768 [2024-11-20 09:19:41.191736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:15.768 [2024-11-20 09:19:41.191918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:15.768 [2024-11-20 09:19:41.191940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:15.768 [2024-11-20 09:19:41.192202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.768 
09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.768 [2024-11-20 09:19:41.197091] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.768 [2024-11-20 09:19:41.197140] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:15.768 true 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:15.768 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.768 [2024-11-20 09:19:41.213312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.028 [2024-11-20 09:19:41.261010] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:16.028 [2024-11-20 09:19:41.261070] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:16.028 [2024-11-20 09:19:41.261106] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:16.028 true 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:16.028 [2024-11-20 09:19:41.273192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60751 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60751 ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60751 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60751 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60751' 00:07:16.028 killing process with pid 60751 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60751 00:07:16.028 [2024-11-20 09:19:41.365926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.028 [2024-11-20 09:19:41.366067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.028 09:19:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60751 00:07:16.028 [2024-11-20 09:19:41.366146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.028 [2024-11-20 09:19:41.366159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:16.028 [2024-11-20 09:19:41.387718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.408 09:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:17.408 00:07:17.408 real 0m2.438s 00:07:17.408 user 0m2.609s 00:07:17.408 sys 0m0.377s 00:07:17.408 09:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.408 
09:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.408 ************************************ 00:07:17.408 END TEST raid0_resize_test 00:07:17.408 ************************************ 00:07:17.408 09:19:42 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:17.408 09:19:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.408 09:19:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.408 09:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.408 ************************************ 00:07:17.408 START TEST raid1_resize_test 00:07:17.408 ************************************ 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60807 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.408 Process raid pid: 60807 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 
'Process raid pid: 60807' 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60807 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60807 ']' 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.408 09:19:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.408 [2024-11-20 09:19:42.789269] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:07:17.408 [2024-11-20 09:19:42.789408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.668 [2024-11-20 09:19:42.967844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.668 [2024-11-20 09:19:43.097548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.927 [2024-11-20 09:19:43.322307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.927 [2024-11-20 09:19:43.322360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 Base_1 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 Base_2 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 [2024-11-20 09:19:43.691435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:18.496 [2024-11-20 09:19:43.693557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:18.496 [2024-11-20 09:19:43.693659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:18.496 [2024-11-20 09:19:43.693675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:18.496 [2024-11-20 09:19:43.693988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:18.496 [2024-11-20 09:19:43.694158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:18.496 [2024-11-20 09:19:43.694179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:18.496 [2024-11-20 09:19:43.694386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 [2024-11-20 09:19:43.703388] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:18.496 [2024-11-20 09:19:43.703444] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:18.496 true 00:07:18.496 
09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 [2024-11-20 09:19:43.719609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.496 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.496 [2024-11-20 09:19:43.767319] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:18.496 [2024-11-20 09:19:43.767366] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:18.496 [2024-11-20 09:19:43.767407] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:18.496 true 00:07:18.497 09:19:43 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:18.497 [2024-11-20 09:19:43.779568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60807 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60807 ']' 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60807 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60807 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.497 killing process with pid 60807 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60807' 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60807 00:07:18.497 [2024-11-20 09:19:43.866715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.497 [2024-11-20 09:19:43.866846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.497 09:19:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60807 00:07:18.497 [2024-11-20 09:19:43.867408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.497 [2024-11-20 09:19:43.867462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:18.497 [2024-11-20 09:19:43.887070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.875 09:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:19.875 00:07:19.875 real 0m2.392s 00:07:19.875 user 0m2.556s 00:07:19.875 sys 0m0.360s 00:07:19.875 09:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.875 09:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.875 ************************************ 00:07:19.875 END TEST raid1_resize_test 00:07:19.875 ************************************ 00:07:19.875 09:19:45 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:19.875 09:19:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:19.875 09:19:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:19.875 09:19:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.875 09:19:45 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.875 09:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.875 ************************************ 00:07:19.875 START TEST raid_state_function_test 00:07:19.875 ************************************ 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60870 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60870' 00:07:19.875 Process raid pid: 60870 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60870 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60870 ']' 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.875 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.876 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.876 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.876 09:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.876 [2024-11-20 09:19:45.253421] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:19.876 [2024-11-20 09:19:45.253571] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.135 [2024-11-20 09:19:45.414774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.135 [2024-11-20 09:19:45.539030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.393 [2024-11-20 09:19:45.760848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.393 [2024-11-20 09:19:45.760932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 [2024-11-20 09:19:46.136339] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.961 
[2024-11-20 09:19:46.136430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.961 [2024-11-20 09:19:46.136459] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.961 [2024-11-20 09:19:46.136475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.961 "name": "Existed_Raid", 00:07:20.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.961 "strip_size_kb": 64, 00:07:20.961 "state": "configuring", 00:07:20.961 "raid_level": "raid0", 00:07:20.961 "superblock": false, 00:07:20.961 "num_base_bdevs": 2, 00:07:20.961 "num_base_bdevs_discovered": 0, 00:07:20.961 "num_base_bdevs_operational": 2, 00:07:20.961 "base_bdevs_list": [ 00:07:20.961 { 00:07:20.961 "name": "BaseBdev1", 00:07:20.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.961 "is_configured": false, 00:07:20.961 "data_offset": 0, 00:07:20.961 "data_size": 0 00:07:20.961 }, 00:07:20.961 { 00:07:20.961 "name": "BaseBdev2", 00:07:20.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.961 "is_configured": false, 00:07:20.961 "data_offset": 0, 00:07:20.961 "data_size": 0 00:07:20.961 } 00:07:20.961 ] 00:07:20.961 }' 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.961 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 [2024-11-20 09:19:46.595710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.221 [2024-11-20 09:19:46.595771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 [2024-11-20 09:19:46.607711] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.221 [2024-11-20 09:19:46.607783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.221 [2024-11-20 09:19:46.607795] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.221 [2024-11-20 09:19:46.607812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 [2024-11-20 09:19:46.659306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.221 BaseBdev1 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:21.221 09:19:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.221 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.481 [ 00:07:21.481 { 00:07:21.481 "name": "BaseBdev1", 00:07:21.481 "aliases": [ 00:07:21.481 "008a49a0-348d-4715-9f7d-6248f9979e43" 00:07:21.481 ], 00:07:21.481 "product_name": "Malloc disk", 00:07:21.481 "block_size": 512, 00:07:21.481 "num_blocks": 65536, 00:07:21.481 "uuid": "008a49a0-348d-4715-9f7d-6248f9979e43", 00:07:21.481 "assigned_rate_limits": { 00:07:21.481 "rw_ios_per_sec": 0, 00:07:21.481 "rw_mbytes_per_sec": 0, 00:07:21.481 "r_mbytes_per_sec": 0, 00:07:21.481 "w_mbytes_per_sec": 0 00:07:21.481 }, 00:07:21.481 "claimed": true, 00:07:21.481 "claim_type": "exclusive_write", 00:07:21.481 "zoned": false, 00:07:21.481 "supported_io_types": { 00:07:21.481 "read": true, 00:07:21.481 "write": true, 00:07:21.481 "unmap": true, 00:07:21.481 "flush": true, 
00:07:21.481 "reset": true, 00:07:21.481 "nvme_admin": false, 00:07:21.481 "nvme_io": false, 00:07:21.481 "nvme_io_md": false, 00:07:21.481 "write_zeroes": true, 00:07:21.481 "zcopy": true, 00:07:21.481 "get_zone_info": false, 00:07:21.481 "zone_management": false, 00:07:21.481 "zone_append": false, 00:07:21.481 "compare": false, 00:07:21.481 "compare_and_write": false, 00:07:21.481 "abort": true, 00:07:21.481 "seek_hole": false, 00:07:21.481 "seek_data": false, 00:07:21.481 "copy": true, 00:07:21.481 "nvme_iov_md": false 00:07:21.481 }, 00:07:21.481 "memory_domains": [ 00:07:21.481 { 00:07:21.481 "dma_device_id": "system", 00:07:21.481 "dma_device_type": 1 00:07:21.481 }, 00:07:21.481 { 00:07:21.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.481 "dma_device_type": 2 00:07:21.481 } 00:07:21.481 ], 00:07:21.481 "driver_specific": {} 00:07:21.481 } 00:07:21.481 ] 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.481 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.481 "name": "Existed_Raid", 00:07:21.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.481 "strip_size_kb": 64, 00:07:21.481 "state": "configuring", 00:07:21.481 "raid_level": "raid0", 00:07:21.481 "superblock": false, 00:07:21.481 "num_base_bdevs": 2, 00:07:21.481 "num_base_bdevs_discovered": 1, 00:07:21.481 "num_base_bdevs_operational": 2, 00:07:21.481 "base_bdevs_list": [ 00:07:21.481 { 00:07:21.481 "name": "BaseBdev1", 00:07:21.481 "uuid": "008a49a0-348d-4715-9f7d-6248f9979e43", 00:07:21.481 "is_configured": true, 00:07:21.481 "data_offset": 0, 00:07:21.482 "data_size": 65536 00:07:21.482 }, 00:07:21.482 { 00:07:21.482 "name": "BaseBdev2", 00:07:21.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.482 "is_configured": false, 00:07:21.482 "data_offset": 0, 00:07:21.482 "data_size": 0 00:07:21.482 } 00:07:21.482 ] 00:07:21.482 }' 00:07:21.482 09:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.482 09:19:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.742 [2024-11-20 09:19:47.122644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.742 [2024-11-20 09:19:47.122722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.742 [2024-11-20 09:19:47.134714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.742 [2024-11-20 09:19:47.136859] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.742 [2024-11-20 09:19:47.136927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.742 "name": "Existed_Raid", 00:07:21.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.742 "strip_size_kb": 64, 00:07:21.742 "state": "configuring", 00:07:21.742 "raid_level": "raid0", 00:07:21.742 "superblock": false, 00:07:21.742 "num_base_bdevs": 2, 00:07:21.742 
"num_base_bdevs_discovered": 1, 00:07:21.742 "num_base_bdevs_operational": 2, 00:07:21.742 "base_bdevs_list": [ 00:07:21.742 { 00:07:21.742 "name": "BaseBdev1", 00:07:21.742 "uuid": "008a49a0-348d-4715-9f7d-6248f9979e43", 00:07:21.742 "is_configured": true, 00:07:21.742 "data_offset": 0, 00:07:21.742 "data_size": 65536 00:07:21.742 }, 00:07:21.742 { 00:07:21.742 "name": "BaseBdev2", 00:07:21.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.742 "is_configured": false, 00:07:21.742 "data_offset": 0, 00:07:21.742 "data_size": 0 00:07:21.742 } 00:07:21.742 ] 00:07:21.742 }' 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.742 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 [2024-11-20 09:19:47.634760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.313 [2024-11-20 09:19:47.634829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.313 [2024-11-20 09:19:47.634841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.313 [2024-11-20 09:19:47.635120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.313 [2024-11-20 09:19:47.635333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.313 [2024-11-20 09:19:47.635364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:22.313 [2024-11-20 09:19:47.635716] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.313 BaseBdev2 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 [ 00:07:22.313 { 00:07:22.313 "name": "BaseBdev2", 00:07:22.313 "aliases": [ 00:07:22.313 "84633dd1-8dd6-429e-939a-6fd07c89f6fb" 00:07:22.313 ], 00:07:22.313 "product_name": "Malloc disk", 00:07:22.313 "block_size": 512, 00:07:22.313 "num_blocks": 65536, 00:07:22.313 "uuid": "84633dd1-8dd6-429e-939a-6fd07c89f6fb", 00:07:22.313 
"assigned_rate_limits": { 00:07:22.313 "rw_ios_per_sec": 0, 00:07:22.313 "rw_mbytes_per_sec": 0, 00:07:22.313 "r_mbytes_per_sec": 0, 00:07:22.313 "w_mbytes_per_sec": 0 00:07:22.313 }, 00:07:22.313 "claimed": true, 00:07:22.313 "claim_type": "exclusive_write", 00:07:22.313 "zoned": false, 00:07:22.313 "supported_io_types": { 00:07:22.313 "read": true, 00:07:22.313 "write": true, 00:07:22.313 "unmap": true, 00:07:22.313 "flush": true, 00:07:22.313 "reset": true, 00:07:22.313 "nvme_admin": false, 00:07:22.313 "nvme_io": false, 00:07:22.313 "nvme_io_md": false, 00:07:22.313 "write_zeroes": true, 00:07:22.313 "zcopy": true, 00:07:22.313 "get_zone_info": false, 00:07:22.313 "zone_management": false, 00:07:22.313 "zone_append": false, 00:07:22.313 "compare": false, 00:07:22.313 "compare_and_write": false, 00:07:22.313 "abort": true, 00:07:22.313 "seek_hole": false, 00:07:22.313 "seek_data": false, 00:07:22.313 "copy": true, 00:07:22.313 "nvme_iov_md": false 00:07:22.313 }, 00:07:22.313 "memory_domains": [ 00:07:22.313 { 00:07:22.313 "dma_device_id": "system", 00:07:22.313 "dma_device_type": 1 00:07:22.313 }, 00:07:22.313 { 00:07:22.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.313 "dma_device_type": 2 00:07:22.313 } 00:07:22.313 ], 00:07:22.313 "driver_specific": {} 00:07:22.313 } 00:07:22.313 ] 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.313 "name": "Existed_Raid", 00:07:22.313 "uuid": "3d6b4787-4525-4a99-9022-a9b2a4fa81e4", 00:07:22.313 "strip_size_kb": 64, 00:07:22.313 "state": "online", 00:07:22.313 "raid_level": "raid0", 00:07:22.313 "superblock": false, 00:07:22.313 "num_base_bdevs": 2, 00:07:22.313 "num_base_bdevs_discovered": 2, 00:07:22.313 "num_base_bdevs_operational": 2, 00:07:22.313 "base_bdevs_list": [ 00:07:22.313 { 
00:07:22.313 "name": "BaseBdev1", 00:07:22.313 "uuid": "008a49a0-348d-4715-9f7d-6248f9979e43", 00:07:22.313 "is_configured": true, 00:07:22.313 "data_offset": 0, 00:07:22.313 "data_size": 65536 00:07:22.313 }, 00:07:22.313 { 00:07:22.313 "name": "BaseBdev2", 00:07:22.313 "uuid": "84633dd1-8dd6-429e-939a-6fd07c89f6fb", 00:07:22.313 "is_configured": true, 00:07:22.313 "data_offset": 0, 00:07:22.313 "data_size": 65536 00:07:22.313 } 00:07:22.313 ] 00:07:22.313 }' 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.313 09:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.884 [2024-11-20 09:19:48.122342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.884 "name": "Existed_Raid", 00:07:22.884 "aliases": [ 00:07:22.884 "3d6b4787-4525-4a99-9022-a9b2a4fa81e4" 00:07:22.884 ], 00:07:22.884 "product_name": "Raid Volume", 00:07:22.884 "block_size": 512, 00:07:22.884 "num_blocks": 131072, 00:07:22.884 "uuid": "3d6b4787-4525-4a99-9022-a9b2a4fa81e4", 00:07:22.884 "assigned_rate_limits": { 00:07:22.884 "rw_ios_per_sec": 0, 00:07:22.884 "rw_mbytes_per_sec": 0, 00:07:22.884 "r_mbytes_per_sec": 0, 00:07:22.884 "w_mbytes_per_sec": 0 00:07:22.884 }, 00:07:22.884 "claimed": false, 00:07:22.884 "zoned": false, 00:07:22.884 "supported_io_types": { 00:07:22.884 "read": true, 00:07:22.884 "write": true, 00:07:22.884 "unmap": true, 00:07:22.884 "flush": true, 00:07:22.884 "reset": true, 00:07:22.884 "nvme_admin": false, 00:07:22.884 "nvme_io": false, 00:07:22.884 "nvme_io_md": false, 00:07:22.884 "write_zeroes": true, 00:07:22.884 "zcopy": false, 00:07:22.884 "get_zone_info": false, 00:07:22.884 "zone_management": false, 00:07:22.884 "zone_append": false, 00:07:22.884 "compare": false, 00:07:22.884 "compare_and_write": false, 00:07:22.884 "abort": false, 00:07:22.884 "seek_hole": false, 00:07:22.884 "seek_data": false, 00:07:22.884 "copy": false, 00:07:22.884 "nvme_iov_md": false 00:07:22.884 }, 00:07:22.884 "memory_domains": [ 00:07:22.884 { 00:07:22.884 "dma_device_id": "system", 00:07:22.884 "dma_device_type": 1 00:07:22.884 }, 00:07:22.884 { 00:07:22.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.884 "dma_device_type": 2 00:07:22.884 }, 00:07:22.884 { 00:07:22.884 "dma_device_id": "system", 00:07:22.884 "dma_device_type": 1 00:07:22.884 }, 00:07:22.884 { 00:07:22.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.884 "dma_device_type": 2 00:07:22.884 } 00:07:22.884 ], 00:07:22.884 "driver_specific": { 00:07:22.884 "raid": { 00:07:22.884 "uuid": "3d6b4787-4525-4a99-9022-a9b2a4fa81e4", 
00:07:22.884 "strip_size_kb": 64, 00:07:22.884 "state": "online", 00:07:22.884 "raid_level": "raid0", 00:07:22.884 "superblock": false, 00:07:22.884 "num_base_bdevs": 2, 00:07:22.884 "num_base_bdevs_discovered": 2, 00:07:22.884 "num_base_bdevs_operational": 2, 00:07:22.884 "base_bdevs_list": [ 00:07:22.884 { 00:07:22.884 "name": "BaseBdev1", 00:07:22.884 "uuid": "008a49a0-348d-4715-9f7d-6248f9979e43", 00:07:22.884 "is_configured": true, 00:07:22.884 "data_offset": 0, 00:07:22.884 "data_size": 65536 00:07:22.884 }, 00:07:22.884 { 00:07:22.884 "name": "BaseBdev2", 00:07:22.884 "uuid": "84633dd1-8dd6-429e-939a-6fd07c89f6fb", 00:07:22.884 "is_configured": true, 00:07:22.884 "data_offset": 0, 00:07:22.884 "data_size": 65536 00:07:22.884 } 00:07:22.884 ] 00:07:22.884 } 00:07:22.884 } 00:07:22.884 }' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:22.884 BaseBdev2' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.884 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.144 [2024-11-20 09:19:48.353723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:23.144 [2024-11-20 09:19:48.353775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.144 [2024-11-20 09:19:48.353838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.144 09:19:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.144 "name": "Existed_Raid", 00:07:23.144 "uuid": "3d6b4787-4525-4a99-9022-a9b2a4fa81e4", 00:07:23.144 "strip_size_kb": 64, 00:07:23.144 "state": "offline", 00:07:23.144 "raid_level": "raid0", 00:07:23.144 "superblock": false, 00:07:23.144 "num_base_bdevs": 2, 00:07:23.144 "num_base_bdevs_discovered": 1, 00:07:23.144 "num_base_bdevs_operational": 1, 00:07:23.144 "base_bdevs_list": [ 00:07:23.144 { 00:07:23.144 "name": null, 00:07:23.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.144 "is_configured": false, 00:07:23.144 "data_offset": 0, 00:07:23.144 "data_size": 65536 00:07:23.144 }, 00:07:23.144 { 00:07:23.144 "name": "BaseBdev2", 00:07:23.144 "uuid": "84633dd1-8dd6-429e-939a-6fd07c89f6fb", 00:07:23.144 "is_configured": true, 00:07:23.144 "data_offset": 0, 00:07:23.144 "data_size": 65536 00:07:23.144 } 00:07:23.144 ] 00:07:23.144 }' 00:07:23.144 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.145 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.714 09:19:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.714 09:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.714 [2024-11-20 09:19:48.936221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.714 [2024-11-20 09:19:48.936294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60870 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60870 ']' 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60870 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60870 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.715 killing process with pid 60870 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60870' 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60870 00:07:23.715 [2024-11-20 09:19:49.124880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.715 09:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60870 00:07:23.715 [2024-11-20 09:19:49.143681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.130 00:07:25.130 real 0m5.163s 00:07:25.130 user 0m7.401s 00:07:25.130 sys 
0m0.845s 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 ************************************ 00:07:25.130 END TEST raid_state_function_test 00:07:25.130 ************************************ 00:07:25.130 09:19:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:25.130 09:19:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:25.130 09:19:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.130 09:19:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 ************************************ 00:07:25.130 START TEST raid_state_function_test_sb 00:07:25.130 ************************************ 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61123 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.130 Process raid pid: 61123 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61123' 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61123 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61123 ']' 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.130 09:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 [2024-11-20 09:19:50.488520] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:07:25.130 [2024-11-20 09:19:50.488639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.390 [2024-11-20 09:19:50.648188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.390 [2024-11-20 09:19:50.779792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.650 [2024-11-20 09:19:51.010002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.650 [2024-11-20 09:19:51.010074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.218 [2024-11-20 09:19:51.380222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.218 [2024-11-20 09:19:51.380292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.218 [2024-11-20 09:19:51.380307] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.218 [2024-11-20 09:19:51.380321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.218 
09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.218 "name": "Existed_Raid", 00:07:26.218 "uuid": "cc4bf1f5-8737-4a5a-9328-3a4a9603feb2", 00:07:26.218 "strip_size_kb": 
64, 00:07:26.218 "state": "configuring", 00:07:26.218 "raid_level": "raid0", 00:07:26.218 "superblock": true, 00:07:26.218 "num_base_bdevs": 2, 00:07:26.218 "num_base_bdevs_discovered": 0, 00:07:26.218 "num_base_bdevs_operational": 2, 00:07:26.218 "base_bdevs_list": [ 00:07:26.218 { 00:07:26.218 "name": "BaseBdev1", 00:07:26.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.218 "is_configured": false, 00:07:26.218 "data_offset": 0, 00:07:26.218 "data_size": 0 00:07:26.218 }, 00:07:26.218 { 00:07:26.218 "name": "BaseBdev2", 00:07:26.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.218 "is_configured": false, 00:07:26.218 "data_offset": 0, 00:07:26.218 "data_size": 0 00:07:26.218 } 00:07:26.218 ] 00:07:26.218 }' 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.218 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.478 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.478 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.478 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.478 [2024-11-20 09:19:51.843320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.479 [2024-11-20 09:19:51.843371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.479 09:19:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.479 [2024-11-20 09:19:51.855306] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.479 [2024-11-20 09:19:51.855357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.479 [2024-11-20 09:19:51.855368] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.479 [2024-11-20 09:19:51.855384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.479 [2024-11-20 09:19:51.905166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.479 BaseBdev1 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.479 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.479 [ 00:07:26.479 { 00:07:26.479 "name": "BaseBdev1", 00:07:26.479 "aliases": [ 00:07:26.479 "49219b63-b352-4ec0-a0b8-09ac38ef2c6a" 00:07:26.479 ], 00:07:26.479 "product_name": "Malloc disk", 00:07:26.479 "block_size": 512, 00:07:26.479 "num_blocks": 65536, 00:07:26.479 "uuid": "49219b63-b352-4ec0-a0b8-09ac38ef2c6a", 00:07:26.479 "assigned_rate_limits": { 00:07:26.479 "rw_ios_per_sec": 0, 00:07:26.479 "rw_mbytes_per_sec": 0, 00:07:26.479 "r_mbytes_per_sec": 0, 00:07:26.479 "w_mbytes_per_sec": 0 00:07:26.479 }, 00:07:26.479 "claimed": true, 00:07:26.479 "claim_type": "exclusive_write", 00:07:26.479 "zoned": false, 00:07:26.479 "supported_io_types": { 00:07:26.479 "read": true, 00:07:26.479 "write": true, 00:07:26.479 "unmap": true, 00:07:26.479 "flush": true, 00:07:26.479 "reset": true, 00:07:26.479 "nvme_admin": false, 00:07:26.479 "nvme_io": false, 00:07:26.479 "nvme_io_md": false, 00:07:26.739 "write_zeroes": true, 00:07:26.739 "zcopy": true, 00:07:26.739 "get_zone_info": false, 00:07:26.739 "zone_management": false, 00:07:26.739 "zone_append": false, 00:07:26.739 "compare": false, 00:07:26.739 "compare_and_write": false, 00:07:26.739 
"abort": true, 00:07:26.739 "seek_hole": false, 00:07:26.739 "seek_data": false, 00:07:26.739 "copy": true, 00:07:26.739 "nvme_iov_md": false 00:07:26.739 }, 00:07:26.739 "memory_domains": [ 00:07:26.739 { 00:07:26.739 "dma_device_id": "system", 00:07:26.739 "dma_device_type": 1 00:07:26.739 }, 00:07:26.739 { 00:07:26.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.739 "dma_device_type": 2 00:07:26.739 } 00:07:26.739 ], 00:07:26.739 "driver_specific": {} 00:07:26.739 } 00:07:26.739 ] 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.739 "name": "Existed_Raid", 00:07:26.739 "uuid": "e134def1-b771-427f-8830-54b28c38b899", 00:07:26.739 "strip_size_kb": 64, 00:07:26.739 "state": "configuring", 00:07:26.739 "raid_level": "raid0", 00:07:26.739 "superblock": true, 00:07:26.739 "num_base_bdevs": 2, 00:07:26.739 "num_base_bdevs_discovered": 1, 00:07:26.739 "num_base_bdevs_operational": 2, 00:07:26.739 "base_bdevs_list": [ 00:07:26.739 { 00:07:26.739 "name": "BaseBdev1", 00:07:26.739 "uuid": "49219b63-b352-4ec0-a0b8-09ac38ef2c6a", 00:07:26.739 "is_configured": true, 00:07:26.739 "data_offset": 2048, 00:07:26.739 "data_size": 63488 00:07:26.739 }, 00:07:26.739 { 00:07:26.739 "name": "BaseBdev2", 00:07:26.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.739 "is_configured": false, 00:07:26.739 "data_offset": 0, 00:07:26.739 "data_size": 0 00:07:26.739 } 00:07:26.739 ] 00:07:26.739 }' 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.739 09:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.000 [2024-11-20 09:19:52.404449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.000 [2024-11-20 09:19:52.404597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.000 [2024-11-20 09:19:52.416511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.000 [2024-11-20 09:19:52.418719] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.000 [2024-11-20 09:19:52.418825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.000 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.259 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.259 "name": "Existed_Raid", 00:07:27.259 "uuid": "231b89bf-716d-4282-a449-849e1ba2fe1f", 00:07:27.259 "strip_size_kb": 64, 00:07:27.259 "state": "configuring", 00:07:27.259 "raid_level": "raid0", 00:07:27.259 "superblock": true, 00:07:27.259 "num_base_bdevs": 2, 00:07:27.259 "num_base_bdevs_discovered": 1, 00:07:27.259 "num_base_bdevs_operational": 2, 00:07:27.259 "base_bdevs_list": [ 00:07:27.259 { 00:07:27.259 "name": "BaseBdev1", 00:07:27.259 "uuid": "49219b63-b352-4ec0-a0b8-09ac38ef2c6a", 00:07:27.259 "is_configured": true, 00:07:27.259 "data_offset": 2048, 
00:07:27.259 "data_size": 63488 00:07:27.259 }, 00:07:27.259 { 00:07:27.259 "name": "BaseBdev2", 00:07:27.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.259 "is_configured": false, 00:07:27.259 "data_offset": 0, 00:07:27.259 "data_size": 0 00:07:27.259 } 00:07:27.259 ] 00:07:27.259 }' 00:07:27.259 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.259 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.520 [2024-11-20 09:19:52.906686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.520 [2024-11-20 09:19:52.906997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:27.520 [2024-11-20 09:19:52.907016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.520 [2024-11-20 09:19:52.907318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:27.520 [2024-11-20 09:19:52.907513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:27.520 [2024-11-20 09:19:52.907531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:27.520 BaseBdev2 00:07:27.520 [2024-11-20 09:19:52.907714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.520 [ 00:07:27.520 { 00:07:27.520 "name": "BaseBdev2", 00:07:27.520 "aliases": [ 00:07:27.520 "9530a5f3-9ecf-472c-97e1-c9fba95d5324" 00:07:27.520 ], 00:07:27.520 "product_name": "Malloc disk", 00:07:27.520 "block_size": 512, 00:07:27.520 "num_blocks": 65536, 00:07:27.520 "uuid": "9530a5f3-9ecf-472c-97e1-c9fba95d5324", 00:07:27.520 "assigned_rate_limits": { 00:07:27.520 "rw_ios_per_sec": 0, 00:07:27.520 "rw_mbytes_per_sec": 0, 00:07:27.520 "r_mbytes_per_sec": 0, 00:07:27.520 "w_mbytes_per_sec": 0 00:07:27.520 }, 00:07:27.520 "claimed": true, 00:07:27.520 "claim_type": 
"exclusive_write", 00:07:27.520 "zoned": false, 00:07:27.520 "supported_io_types": { 00:07:27.520 "read": true, 00:07:27.520 "write": true, 00:07:27.520 "unmap": true, 00:07:27.520 "flush": true, 00:07:27.520 "reset": true, 00:07:27.520 "nvme_admin": false, 00:07:27.520 "nvme_io": false, 00:07:27.520 "nvme_io_md": false, 00:07:27.520 "write_zeroes": true, 00:07:27.520 "zcopy": true, 00:07:27.520 "get_zone_info": false, 00:07:27.520 "zone_management": false, 00:07:27.520 "zone_append": false, 00:07:27.520 "compare": false, 00:07:27.520 "compare_and_write": false, 00:07:27.520 "abort": true, 00:07:27.520 "seek_hole": false, 00:07:27.520 "seek_data": false, 00:07:27.520 "copy": true, 00:07:27.520 "nvme_iov_md": false 00:07:27.520 }, 00:07:27.520 "memory_domains": [ 00:07:27.520 { 00:07:27.520 "dma_device_id": "system", 00:07:27.520 "dma_device_type": 1 00:07:27.520 }, 00:07:27.520 { 00:07:27.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.520 "dma_device_type": 2 00:07:27.520 } 00:07:27.520 ], 00:07:27.520 "driver_specific": {} 00:07:27.520 } 00:07:27.520 ] 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.520 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.779 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.779 "name": "Existed_Raid", 00:07:27.779 "uuid": "231b89bf-716d-4282-a449-849e1ba2fe1f", 00:07:27.779 "strip_size_kb": 64, 00:07:27.779 "state": "online", 00:07:27.779 "raid_level": "raid0", 00:07:27.779 "superblock": true, 00:07:27.779 "num_base_bdevs": 2, 00:07:27.779 "num_base_bdevs_discovered": 2, 00:07:27.779 "num_base_bdevs_operational": 2, 00:07:27.779 "base_bdevs_list": [ 00:07:27.779 { 00:07:27.779 "name": "BaseBdev1", 00:07:27.779 "uuid": "49219b63-b352-4ec0-a0b8-09ac38ef2c6a", 00:07:27.779 "is_configured": true, 00:07:27.779 "data_offset": 2048, 00:07:27.779 "data_size": 63488 
00:07:27.779 }, 00:07:27.779 { 00:07:27.779 "name": "BaseBdev2", 00:07:27.779 "uuid": "9530a5f3-9ecf-472c-97e1-c9fba95d5324", 00:07:27.779 "is_configured": true, 00:07:27.779 "data_offset": 2048, 00:07:27.779 "data_size": 63488 00:07:27.779 } 00:07:27.779 ] 00:07:27.779 }' 00:07:27.779 09:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.779 09:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.039 [2024-11-20 09:19:53.426231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.039 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.039 "name": 
"Existed_Raid", 00:07:28.039 "aliases": [ 00:07:28.039 "231b89bf-716d-4282-a449-849e1ba2fe1f" 00:07:28.039 ], 00:07:28.039 "product_name": "Raid Volume", 00:07:28.039 "block_size": 512, 00:07:28.039 "num_blocks": 126976, 00:07:28.039 "uuid": "231b89bf-716d-4282-a449-849e1ba2fe1f", 00:07:28.039 "assigned_rate_limits": { 00:07:28.039 "rw_ios_per_sec": 0, 00:07:28.039 "rw_mbytes_per_sec": 0, 00:07:28.039 "r_mbytes_per_sec": 0, 00:07:28.039 "w_mbytes_per_sec": 0 00:07:28.039 }, 00:07:28.039 "claimed": false, 00:07:28.039 "zoned": false, 00:07:28.039 "supported_io_types": { 00:07:28.039 "read": true, 00:07:28.039 "write": true, 00:07:28.039 "unmap": true, 00:07:28.039 "flush": true, 00:07:28.039 "reset": true, 00:07:28.039 "nvme_admin": false, 00:07:28.039 "nvme_io": false, 00:07:28.039 "nvme_io_md": false, 00:07:28.039 "write_zeroes": true, 00:07:28.039 "zcopy": false, 00:07:28.039 "get_zone_info": false, 00:07:28.039 "zone_management": false, 00:07:28.039 "zone_append": false, 00:07:28.039 "compare": false, 00:07:28.039 "compare_and_write": false, 00:07:28.039 "abort": false, 00:07:28.039 "seek_hole": false, 00:07:28.039 "seek_data": false, 00:07:28.039 "copy": false, 00:07:28.039 "nvme_iov_md": false 00:07:28.039 }, 00:07:28.039 "memory_domains": [ 00:07:28.039 { 00:07:28.039 "dma_device_id": "system", 00:07:28.039 "dma_device_type": 1 00:07:28.039 }, 00:07:28.039 { 00:07:28.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.039 "dma_device_type": 2 00:07:28.039 }, 00:07:28.039 { 00:07:28.039 "dma_device_id": "system", 00:07:28.039 "dma_device_type": 1 00:07:28.039 }, 00:07:28.039 { 00:07:28.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.039 "dma_device_type": 2 00:07:28.039 } 00:07:28.039 ], 00:07:28.039 "driver_specific": { 00:07:28.039 "raid": { 00:07:28.040 "uuid": "231b89bf-716d-4282-a449-849e1ba2fe1f", 00:07:28.040 "strip_size_kb": 64, 00:07:28.040 "state": "online", 00:07:28.040 "raid_level": "raid0", 00:07:28.040 "superblock": true, 00:07:28.040 
"num_base_bdevs": 2, 00:07:28.040 "num_base_bdevs_discovered": 2, 00:07:28.040 "num_base_bdevs_operational": 2, 00:07:28.040 "base_bdevs_list": [ 00:07:28.040 { 00:07:28.040 "name": "BaseBdev1", 00:07:28.040 "uuid": "49219b63-b352-4ec0-a0b8-09ac38ef2c6a", 00:07:28.040 "is_configured": true, 00:07:28.040 "data_offset": 2048, 00:07:28.040 "data_size": 63488 00:07:28.040 }, 00:07:28.040 { 00:07:28.040 "name": "BaseBdev2", 00:07:28.040 "uuid": "9530a5f3-9ecf-472c-97e1-c9fba95d5324", 00:07:28.040 "is_configured": true, 00:07:28.040 "data_offset": 2048, 00:07:28.040 "data_size": 63488 00:07:28.040 } 00:07:28.040 ] 00:07:28.040 } 00:07:28.040 } 00:07:28.040 }' 00:07:28.040 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:28.298 BaseBdev2' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.298 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.299 [2024-11-20 09:19:53.665668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.299 [2024-11-20 09:19:53.665723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.299 [2024-11-20 09:19:53.665787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.556 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.557 09:19:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.557 "name": "Existed_Raid", 00:07:28.557 "uuid": "231b89bf-716d-4282-a449-849e1ba2fe1f", 00:07:28.557 "strip_size_kb": 64, 00:07:28.557 "state": "offline", 00:07:28.557 "raid_level": "raid0", 00:07:28.557 "superblock": true, 00:07:28.557 "num_base_bdevs": 2, 00:07:28.557 "num_base_bdevs_discovered": 1, 00:07:28.557 "num_base_bdevs_operational": 1, 00:07:28.557 "base_bdevs_list": [ 00:07:28.557 { 00:07:28.557 "name": null, 00:07:28.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.557 "is_configured": false, 00:07:28.557 "data_offset": 0, 00:07:28.557 "data_size": 63488 00:07:28.557 }, 00:07:28.557 { 00:07:28.557 "name": "BaseBdev2", 00:07:28.557 "uuid": "9530a5f3-9ecf-472c-97e1-c9fba95d5324", 00:07:28.557 "is_configured": true, 00:07:28.557 "data_offset": 2048, 00:07:28.557 "data_size": 63488 00:07:28.557 } 00:07:28.557 ] 00:07:28.557 }' 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.557 09:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.825 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:28.825 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.825 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:28.825 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.825 09:19:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.825 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.825 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.126 [2024-11-20 09:19:54.295963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:29.126 [2024-11-20 09:19:54.296110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.126 09:19:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61123 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61123 ']' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61123 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61123 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61123' 00:07:29.126 killing process with pid 61123 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61123 00:07:29.126 [2024-11-20 09:19:54.502742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.126 09:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61123 00:07:29.126 [2024-11-20 09:19:54.523818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.501 ************************************ 
00:07:30.501 END TEST raid_state_function_test_sb 00:07:30.501 ************************************ 00:07:30.501 09:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.501 00:07:30.501 real 0m5.425s 00:07:30.501 user 0m7.748s 00:07:30.501 sys 0m0.897s 00:07:30.501 09:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.501 09:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.501 09:19:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:30.501 09:19:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:30.501 09:19:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.501 09:19:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.501 ************************************ 00:07:30.501 START TEST raid_superblock_test 00:07:30.501 ************************************ 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:30.501 
09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61375 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61375 00:07:30.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61375 ']' 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.501 09:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.760 [2024-11-20 09:19:55.994807] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:30.760 [2024-11-20 09:19:55.995099] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61375 ] 00:07:30.760 [2024-11-20 09:19:56.179566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.019 [2024-11-20 09:19:56.313739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.279 [2024-11-20 09:19:56.538604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.279 [2024-11-20 09:19:56.538764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.538 09:19:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.538 malloc1 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.538 [2024-11-20 09:19:56.944639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.538 [2024-11-20 09:19:56.944836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.538 [2024-11-20 09:19:56.944906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:31.538 [2024-11-20 09:19:56.944952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.538 [2024-11-20 09:19:56.947761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.538 [2024-11-20 09:19:56.947898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.538 pt1 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.538 09:19:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.538 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.539 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.539 09:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:31.539 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.539 09:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.798 malloc2 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.798 [2024-11-20 09:19:57.008202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.798 [2024-11-20 09:19:57.008370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.798 [2024-11-20 09:19:57.008444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:31.798 
[2024-11-20 09:19:57.008495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.798 [2024-11-20 09:19:57.011140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.798 [2024-11-20 09:19:57.011212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.798 pt2 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.798 [2024-11-20 09:19:57.020325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.798 [2024-11-20 09:19:57.022619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.798 [2024-11-20 09:19:57.022899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.798 [2024-11-20 09:19:57.022963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.798 [2024-11-20 09:19:57.023332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.798 [2024-11-20 09:19:57.023596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.798 [2024-11-20 09:19:57.023655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:31.798 [2024-11-20 09:19:57.023929] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.798 "name": "raid_bdev1", 00:07:31.798 "uuid": 
"8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:31.798 "strip_size_kb": 64, 00:07:31.798 "state": "online", 00:07:31.798 "raid_level": "raid0", 00:07:31.798 "superblock": true, 00:07:31.798 "num_base_bdevs": 2, 00:07:31.798 "num_base_bdevs_discovered": 2, 00:07:31.798 "num_base_bdevs_operational": 2, 00:07:31.798 "base_bdevs_list": [ 00:07:31.798 { 00:07:31.798 "name": "pt1", 00:07:31.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.798 "is_configured": true, 00:07:31.798 "data_offset": 2048, 00:07:31.798 "data_size": 63488 00:07:31.798 }, 00:07:31.798 { 00:07:31.798 "name": "pt2", 00:07:31.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.798 "is_configured": true, 00:07:31.798 "data_offset": 2048, 00:07:31.798 "data_size": 63488 00:07:31.798 } 00:07:31.798 ] 00:07:31.798 }' 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.798 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.057 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.057 09:19:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.315 [2024-11-20 09:19:57.513911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.315 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.315 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.315 "name": "raid_bdev1", 00:07:32.315 "aliases": [ 00:07:32.315 "8d22905c-0851-4d71-b1e1-797a648ee57d" 00:07:32.315 ], 00:07:32.315 "product_name": "Raid Volume", 00:07:32.315 "block_size": 512, 00:07:32.315 "num_blocks": 126976, 00:07:32.315 "uuid": "8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:32.315 "assigned_rate_limits": { 00:07:32.315 "rw_ios_per_sec": 0, 00:07:32.315 "rw_mbytes_per_sec": 0, 00:07:32.315 "r_mbytes_per_sec": 0, 00:07:32.315 "w_mbytes_per_sec": 0 00:07:32.315 }, 00:07:32.315 "claimed": false, 00:07:32.315 "zoned": false, 00:07:32.315 "supported_io_types": { 00:07:32.315 "read": true, 00:07:32.315 "write": true, 00:07:32.315 "unmap": true, 00:07:32.315 "flush": true, 00:07:32.315 "reset": true, 00:07:32.315 "nvme_admin": false, 00:07:32.315 "nvme_io": false, 00:07:32.315 "nvme_io_md": false, 00:07:32.315 "write_zeroes": true, 00:07:32.315 "zcopy": false, 00:07:32.315 "get_zone_info": false, 00:07:32.315 "zone_management": false, 00:07:32.315 "zone_append": false, 00:07:32.315 "compare": false, 00:07:32.315 "compare_and_write": false, 00:07:32.315 "abort": false, 00:07:32.315 "seek_hole": false, 00:07:32.315 "seek_data": false, 00:07:32.315 "copy": false, 00:07:32.315 "nvme_iov_md": false 00:07:32.315 }, 00:07:32.315 "memory_domains": [ 00:07:32.315 { 00:07:32.315 "dma_device_id": "system", 00:07:32.315 "dma_device_type": 1 00:07:32.315 }, 00:07:32.315 { 00:07:32.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.315 "dma_device_type": 2 00:07:32.315 }, 00:07:32.315 { 00:07:32.315 "dma_device_id": "system", 00:07:32.315 "dma_device_type": 
1 00:07:32.315 }, 00:07:32.315 { 00:07:32.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.315 "dma_device_type": 2 00:07:32.315 } 00:07:32.315 ], 00:07:32.315 "driver_specific": { 00:07:32.315 "raid": { 00:07:32.315 "uuid": "8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:32.315 "strip_size_kb": 64, 00:07:32.315 "state": "online", 00:07:32.315 "raid_level": "raid0", 00:07:32.315 "superblock": true, 00:07:32.315 "num_base_bdevs": 2, 00:07:32.315 "num_base_bdevs_discovered": 2, 00:07:32.315 "num_base_bdevs_operational": 2, 00:07:32.315 "base_bdevs_list": [ 00:07:32.315 { 00:07:32.315 "name": "pt1", 00:07:32.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.315 "is_configured": true, 00:07:32.315 "data_offset": 2048, 00:07:32.315 "data_size": 63488 00:07:32.315 }, 00:07:32.315 { 00:07:32.316 "name": "pt2", 00:07:32.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.316 "is_configured": true, 00:07:32.316 "data_offset": 2048, 00:07:32.316 "data_size": 63488 00:07:32.316 } 00:07:32.316 ] 00:07:32.316 } 00:07:32.316 } 00:07:32.316 }' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.316 pt2' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 09:19:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:32.316 [2024-11-20 09:19:57.714477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8d22905c-0851-4d71-b1e1-797a648ee57d 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8d22905c-0851-4d71-b1e1-797a648ee57d ']' 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 [2024-11-20 09:19:57.766259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.316 [2024-11-20 09:19:57.766308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.316 [2024-11-20 09:19:57.766421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.316 [2024-11-20 09:19:57.766496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.316 [2024-11-20 09:19:57.766516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 [2024-11-20 09:19:57.890672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:32.575 [2024-11-20 09:19:57.892982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:32.575 [2024-11-20 09:19:57.893071] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:32.575 [2024-11-20 09:19:57.893153] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:32.575 [2024-11-20 09:19:57.893174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.575 [2024-11-20 09:19:57.893194] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:32.575 request: 00:07:32.575 { 00:07:32.575 "name": "raid_bdev1", 00:07:32.575 "raid_level": "raid0", 00:07:32.575 "base_bdevs": [ 00:07:32.575 "malloc1", 00:07:32.575 "malloc2" 00:07:32.575 ], 00:07:32.575 "strip_size_kb": 64, 00:07:32.575 "superblock": false, 00:07:32.575 "method": "bdev_raid_create", 00:07:32.575 "req_id": 1 00:07:32.575 } 00:07:32.575 Got JSON-RPC error response 00:07:32.575 response: 00:07:32.575 { 00:07:32.575 "code": -17, 00:07:32.575 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:32.575 } 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.575 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 [2024-11-20 09:19:57.954776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.576 [2024-11-20 09:19:57.954956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.576 [2024-11-20 09:19:57.955017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:32.576 [2024-11-20 09:19:57.955069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.576 [2024-11-20 09:19:57.957689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.576 [2024-11-20 09:19:57.957804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.576 [2024-11-20 09:19:57.957986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:32.576 [2024-11-20 09:19:57.958112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.576 pt1 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.576 "name": "raid_bdev1", 00:07:32.576 "uuid": "8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:32.576 "strip_size_kb": 64, 00:07:32.576 "state": "configuring", 00:07:32.576 "raid_level": "raid0", 00:07:32.576 "superblock": true, 00:07:32.576 "num_base_bdevs": 2, 00:07:32.576 "num_base_bdevs_discovered": 1, 00:07:32.576 "num_base_bdevs_operational": 2, 00:07:32.576 "base_bdevs_list": [ 00:07:32.576 { 00:07:32.576 "name": "pt1", 00:07:32.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.576 "is_configured": true, 00:07:32.576 "data_offset": 2048, 00:07:32.576 "data_size": 63488 00:07:32.576 }, 00:07:32.576 { 00:07:32.576 "name": null, 00:07:32.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.576 "is_configured": false, 00:07:32.576 "data_offset": 2048, 00:07:32.576 "data_size": 63488 00:07:32.576 } 00:07:32.576 ] 00:07:32.576 }' 00:07:32.576 09:19:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.576 09:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.144 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:33.144 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:33.144 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.144 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.144 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.144 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.144 [2024-11-20 09:19:58.391962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.144 [2024-11-20 09:19:58.392061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.144 [2024-11-20 09:19:58.392090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:33.144 [2024-11-20 09:19:58.392115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.144 [2024-11-20 09:19:58.392709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.144 [2024-11-20 09:19:58.392746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.144 [2024-11-20 09:19:58.392879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:33.144 [2024-11-20 09:19:58.392922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.144 [2024-11-20 09:19:58.393062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.144 [2024-11-20 09:19:58.393078] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.144 [2024-11-20 09:19:58.393374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:33.145 [2024-11-20 09:19:58.393682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.145 [2024-11-20 09:19:58.393737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:33.145 [2024-11-20 09:19:58.394010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.145 pt2 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.145 "name": "raid_bdev1", 00:07:33.145 "uuid": "8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:33.145 "strip_size_kb": 64, 00:07:33.145 "state": "online", 00:07:33.145 "raid_level": "raid0", 00:07:33.145 "superblock": true, 00:07:33.145 "num_base_bdevs": 2, 00:07:33.145 "num_base_bdevs_discovered": 2, 00:07:33.145 "num_base_bdevs_operational": 2, 00:07:33.145 "base_bdevs_list": [ 00:07:33.145 { 00:07:33.145 "name": "pt1", 00:07:33.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.145 "is_configured": true, 00:07:33.145 "data_offset": 2048, 00:07:33.145 "data_size": 63488 00:07:33.145 }, 00:07:33.145 { 00:07:33.145 "name": "pt2", 00:07:33.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.145 "is_configured": true, 00:07:33.145 "data_offset": 2048, 00:07:33.145 "data_size": 63488 00:07:33.145 } 00:07:33.145 ] 00:07:33.145 }' 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.145 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.712 
09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 [2024-11-20 09:19:58.881590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.712 "name": "raid_bdev1", 00:07:33.712 "aliases": [ 00:07:33.712 "8d22905c-0851-4d71-b1e1-797a648ee57d" 00:07:33.712 ], 00:07:33.712 "product_name": "Raid Volume", 00:07:33.712 "block_size": 512, 00:07:33.712 "num_blocks": 126976, 00:07:33.712 "uuid": "8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:33.712 "assigned_rate_limits": { 00:07:33.712 "rw_ios_per_sec": 0, 00:07:33.712 "rw_mbytes_per_sec": 0, 00:07:33.712 "r_mbytes_per_sec": 0, 00:07:33.712 "w_mbytes_per_sec": 0 00:07:33.712 }, 00:07:33.712 "claimed": false, 00:07:33.712 "zoned": false, 00:07:33.712 "supported_io_types": { 00:07:33.712 "read": true, 00:07:33.712 "write": true, 00:07:33.712 "unmap": true, 00:07:33.712 "flush": true, 00:07:33.712 "reset": true, 00:07:33.712 "nvme_admin": false, 00:07:33.712 "nvme_io": false, 00:07:33.712 "nvme_io_md": false, 00:07:33.712 
"write_zeroes": true, 00:07:33.712 "zcopy": false, 00:07:33.712 "get_zone_info": false, 00:07:33.712 "zone_management": false, 00:07:33.712 "zone_append": false, 00:07:33.712 "compare": false, 00:07:33.712 "compare_and_write": false, 00:07:33.712 "abort": false, 00:07:33.712 "seek_hole": false, 00:07:33.712 "seek_data": false, 00:07:33.712 "copy": false, 00:07:33.712 "nvme_iov_md": false 00:07:33.712 }, 00:07:33.712 "memory_domains": [ 00:07:33.712 { 00:07:33.712 "dma_device_id": "system", 00:07:33.712 "dma_device_type": 1 00:07:33.712 }, 00:07:33.712 { 00:07:33.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.712 "dma_device_type": 2 00:07:33.712 }, 00:07:33.712 { 00:07:33.712 "dma_device_id": "system", 00:07:33.712 "dma_device_type": 1 00:07:33.712 }, 00:07:33.712 { 00:07:33.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.712 "dma_device_type": 2 00:07:33.712 } 00:07:33.712 ], 00:07:33.712 "driver_specific": { 00:07:33.712 "raid": { 00:07:33.712 "uuid": "8d22905c-0851-4d71-b1e1-797a648ee57d", 00:07:33.712 "strip_size_kb": 64, 00:07:33.712 "state": "online", 00:07:33.712 "raid_level": "raid0", 00:07:33.712 "superblock": true, 00:07:33.712 "num_base_bdevs": 2, 00:07:33.712 "num_base_bdevs_discovered": 2, 00:07:33.712 "num_base_bdevs_operational": 2, 00:07:33.712 "base_bdevs_list": [ 00:07:33.712 { 00:07:33.712 "name": "pt1", 00:07:33.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.712 "is_configured": true, 00:07:33.712 "data_offset": 2048, 00:07:33.712 "data_size": 63488 00:07:33.712 }, 00:07:33.712 { 00:07:33.712 "name": "pt2", 00:07:33.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.712 "is_configured": true, 00:07:33.712 "data_offset": 2048, 00:07:33.712 "data_size": 63488 00:07:33.712 } 00:07:33.712 ] 00:07:33.712 } 00:07:33.712 } 00:07:33.712 }' 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.712 pt2' 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.712 09:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.712 09:19:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:33.712 [2024-11-20 09:19:59.102203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8d22905c-0851-4d71-b1e1-797a648ee57d '!=' 8d22905c-0851-4d71-b1e1-797a648ee57d ']' 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61375 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61375 ']' 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61375 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.712 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61375 00:07:33.969 09:19:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.969 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.969 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61375' 00:07:33.969 killing process with pid 61375 00:07:33.969 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61375 00:07:33.969 [2024-11-20 09:19:59.178188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.969 09:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61375 00:07:33.969 [2024-11-20 09:19:59.178426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.969 [2024-11-20 09:19:59.178512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.969 [2024-11-20 09:19:59.178529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.969 [2024-11-20 09:19:59.417274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.342 09:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:35.343 ************************************ 00:07:35.343 END TEST raid_superblock_test 00:07:35.343 ************************************ 00:07:35.343 00:07:35.343 real 0m4.797s 00:07:35.343 user 0m6.661s 00:07:35.343 sys 0m0.732s 00:07:35.343 09:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.343 09:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.343 09:20:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:35.343 09:20:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.343 09:20:00 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:35.343 09:20:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.343 ************************************ 00:07:35.343 START TEST raid_read_error_test 00:07:35.343 ************************************ 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i9Uo6nqocv 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61592 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61592 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61592 ']' 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.343 09:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.601 [2024-11-20 09:20:00.864376] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:35.601 [2024-11-20 09:20:00.865134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61592 ] 00:07:35.860 [2024-11-20 09:20:01.067140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.860 [2024-11-20 09:20:01.209202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.119 [2024-11-20 09:20:01.445984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.119 [2024-11-20 09:20:01.446159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.378 BaseBdev1_malloc 00:07:36.378 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.636 true 00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.636 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.636 [2024-11-20 09:20:01.850163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:36.637 [2024-11-20 09:20:01.850321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.637 [2024-11-20 09:20:01.850374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:36.637 [2024-11-20 09:20:01.850426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.637 [2024-11-20 09:20:01.853090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.637 [2024-11-20 09:20:01.853199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:36.637 BaseBdev1 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:36.637 BaseBdev2_malloc 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 true 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 [2024-11-20 09:20:01.923249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:36.637 [2024-11-20 09:20:01.923342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.637 [2024-11-20 09:20:01.923369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:36.637 [2024-11-20 09:20:01.923382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.637 [2024-11-20 09:20:01.925958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.637 [2024-11-20 09:20:01.926095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:36.637 BaseBdev2 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:36.637 09:20:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 [2024-11-20 09:20:01.935333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.637 [2024-11-20 09:20:01.937600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.637 [2024-11-20 09:20:01.937853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:36.637 [2024-11-20 09:20:01.937874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.637 [2024-11-20 09:20:01.938180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:36.637 [2024-11-20 09:20:01.938383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:36.637 [2024-11-20 09:20:01.938398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:36.637 [2024-11-20 09:20:01.938638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.637 "name": "raid_bdev1", 00:07:36.637 "uuid": "d365e922-b34a-4ba3-a31f-e2ace416f878", 00:07:36.637 "strip_size_kb": 64, 00:07:36.637 "state": "online", 00:07:36.637 "raid_level": "raid0", 00:07:36.637 "superblock": true, 00:07:36.637 "num_base_bdevs": 2, 00:07:36.637 "num_base_bdevs_discovered": 2, 00:07:36.637 "num_base_bdevs_operational": 2, 00:07:36.637 "base_bdevs_list": [ 00:07:36.637 { 00:07:36.637 "name": "BaseBdev1", 00:07:36.637 "uuid": "1a6bf71e-e3a3-5d31-b981-f6aa614f85cc", 00:07:36.637 "is_configured": true, 00:07:36.637 "data_offset": 2048, 00:07:36.637 "data_size": 63488 00:07:36.637 }, 00:07:36.637 { 00:07:36.637 "name": "BaseBdev2", 00:07:36.637 "uuid": "377fc207-d14a-5f2e-89a2-f81da826b4cf", 00:07:36.637 "is_configured": true, 00:07:36.637 "data_offset": 2048, 00:07:36.637 "data_size": 63488 00:07:36.637 } 00:07:36.637 ] 00:07:36.637 }' 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.637 09:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.205 09:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:37.205 09:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:37.205 [2024-11-20 09:20:02.555926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.140 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.141 "name": "raid_bdev1", 00:07:38.141 "uuid": "d365e922-b34a-4ba3-a31f-e2ace416f878", 00:07:38.141 "strip_size_kb": 64, 00:07:38.141 "state": "online", 00:07:38.141 "raid_level": "raid0", 00:07:38.141 "superblock": true, 00:07:38.141 "num_base_bdevs": 2, 00:07:38.141 "num_base_bdevs_discovered": 2, 00:07:38.141 "num_base_bdevs_operational": 2, 00:07:38.141 "base_bdevs_list": [ 00:07:38.141 { 00:07:38.141 "name": "BaseBdev1", 00:07:38.141 "uuid": "1a6bf71e-e3a3-5d31-b981-f6aa614f85cc", 00:07:38.141 "is_configured": true, 00:07:38.141 "data_offset": 2048, 00:07:38.141 "data_size": 63488 00:07:38.141 }, 00:07:38.141 { 00:07:38.141 "name": "BaseBdev2", 00:07:38.141 "uuid": "377fc207-d14a-5f2e-89a2-f81da826b4cf", 00:07:38.141 "is_configured": true, 00:07:38.141 "data_offset": 2048, 00:07:38.141 "data_size": 63488 00:07:38.141 } 00:07:38.141 ] 00:07:38.141 }' 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.141 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.710 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:38.710 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.710 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.710 [2024-11-20 09:20:03.925255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.710 [2024-11-20 09:20:03.925382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.710 [2024-11-20 09:20:03.928589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.710 [2024-11-20 09:20:03.928643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.710 [2024-11-20 09:20:03.928680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.710 [2024-11-20 09:20:03.928694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:38.710 { 00:07:38.710 "results": [ 00:07:38.710 { 00:07:38.710 "job": "raid_bdev1", 00:07:38.710 "core_mask": "0x1", 00:07:38.710 "workload": "randrw", 00:07:38.710 "percentage": 50, 00:07:38.710 "status": "finished", 00:07:38.710 "queue_depth": 1, 00:07:38.710 "io_size": 131072, 00:07:38.710 "runtime": 1.369735, 00:07:38.710 "iops": 13121.516205689422, 00:07:38.710 "mibps": 1640.1895257111778, 00:07:38.710 "io_failed": 1, 00:07:38.710 "io_timeout": 0, 00:07:38.710 "avg_latency_us": 105.69333229026108, 00:07:38.710 "min_latency_us": 31.748471615720526, 00:07:38.710 "max_latency_us": 1788.646288209607 00:07:38.710 } 00:07:38.710 ], 00:07:38.710 "core_count": 1 00:07:38.710 } 00:07:38.710 09:20:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61592 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61592 ']' 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61592 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61592 00:07:38.711 killing process with pid 61592 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61592' 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61592 00:07:38.711 [2024-11-20 09:20:03.981266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.711 09:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61592 00:07:38.711 [2024-11-20 09:20:04.130731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.092 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:40.092 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i9Uo6nqocv 00:07:40.092 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:40.093 00:07:40.093 real 0m4.634s 00:07:40.093 user 0m5.655s 00:07:40.093 sys 0m0.594s 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.093 09:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.093 ************************************ 00:07:40.093 END TEST raid_read_error_test 00:07:40.093 ************************************ 00:07:40.093 09:20:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:40.093 09:20:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:40.093 09:20:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.093 09:20:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.093 ************************************ 00:07:40.093 START TEST raid_write_error_test 00:07:40.093 ************************************ 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.093 09:20:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8Ftib1m0A8 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61738 00:07:40.093 09:20:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61738 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61738 ']' 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.093 09:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.353 [2024-11-20 09:20:05.564228] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:07:40.353 [2024-11-20 09:20:05.564346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ] 00:07:40.353 [2024-11-20 09:20:05.722063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.613 [2024-11-20 09:20:05.846189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.613 [2024-11-20 09:20:06.062826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.613 [2024-11-20 09:20:06.062986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 BaseBdev1_malloc 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 true 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 [2024-11-20 09:20:06.513239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:41.182 [2024-11-20 09:20:06.513404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.182 [2024-11-20 09:20:06.513455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:41.182 [2024-11-20 09:20:06.513472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.182 [2024-11-20 09:20:06.516093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.182 [2024-11-20 09:20:06.516144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:41.182 BaseBdev1 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 BaseBdev2_malloc 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:41.182 09:20:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 true 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 [2024-11-20 09:20:06.583597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:41.182 [2024-11-20 09:20:06.583697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.182 [2024-11-20 09:20:06.583737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:41.182 [2024-11-20 09:20:06.583752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.182 [2024-11-20 09:20:06.586359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.182 [2024-11-20 09:20:06.586412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:41.182 BaseBdev2 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.182 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.182 [2024-11-20 09:20:06.599718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:41.182 [2024-11-20 09:20:06.602246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.182 [2024-11-20 09:20:06.602565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:41.182 [2024-11-20 09:20:06.602630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.182 [2024-11-20 09:20:06.602972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:41.182 [2024-11-20 09:20:06.603231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:41.182 [2024-11-20 09:20:06.603284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:41.183 [2024-11-20 09:20:06.603606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.183 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.442 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.442 "name": "raid_bdev1", 00:07:41.442 "uuid": "baeeb1e6-b9c4-4774-b8d6-aa05d396b333", 00:07:41.442 "strip_size_kb": 64, 00:07:41.442 "state": "online", 00:07:41.442 "raid_level": "raid0", 00:07:41.442 "superblock": true, 00:07:41.442 "num_base_bdevs": 2, 00:07:41.442 "num_base_bdevs_discovered": 2, 00:07:41.442 "num_base_bdevs_operational": 2, 00:07:41.442 "base_bdevs_list": [ 00:07:41.442 { 00:07:41.442 "name": "BaseBdev1", 00:07:41.442 "uuid": "cea24a4b-dd99-59ea-93ea-3fce1548cb48", 00:07:41.442 "is_configured": true, 00:07:41.442 "data_offset": 2048, 00:07:41.442 "data_size": 63488 00:07:41.442 }, 00:07:41.442 { 00:07:41.442 "name": "BaseBdev2", 00:07:41.442 "uuid": "319cdb51-96c6-535f-94b6-7d1623860669", 00:07:41.442 "is_configured": true, 00:07:41.442 "data_offset": 2048, 00:07:41.442 "data_size": 63488 00:07:41.442 } 00:07:41.442 ] 00:07:41.442 }' 00:07:41.442 09:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.442 09:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.715 09:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:41.715 09:20:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:41.975 [2024-11-20 09:20:07.228235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.912 09:20:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.912 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.912 "name": "raid_bdev1", 00:07:42.912 "uuid": "baeeb1e6-b9c4-4774-b8d6-aa05d396b333", 00:07:42.912 "strip_size_kb": 64, 00:07:42.912 "state": "online", 00:07:42.912 "raid_level": "raid0", 00:07:42.912 "superblock": true, 00:07:42.912 "num_base_bdevs": 2, 00:07:42.912 "num_base_bdevs_discovered": 2, 00:07:42.912 "num_base_bdevs_operational": 2, 00:07:42.912 "base_bdevs_list": [ 00:07:42.912 { 00:07:42.913 "name": "BaseBdev1", 00:07:42.913 "uuid": "cea24a4b-dd99-59ea-93ea-3fce1548cb48", 00:07:42.913 "is_configured": true, 00:07:42.913 "data_offset": 2048, 00:07:42.913 "data_size": 63488 00:07:42.913 }, 00:07:42.913 { 00:07:42.913 "name": "BaseBdev2", 00:07:42.913 "uuid": "319cdb51-96c6-535f-94b6-7d1623860669", 00:07:42.913 "is_configured": true, 00:07:42.913 "data_offset": 2048, 00:07:42.913 "data_size": 63488 00:07:42.913 } 00:07:42.913 ] 00:07:42.913 }' 00:07:42.913 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.913 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.174 [2024-11-20 09:20:08.573703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.174 [2024-11-20 09:20:08.573833] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.174 [2024-11-20 09:20:08.577154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.174 [2024-11-20 09:20:08.577289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.174 [2024-11-20 09:20:08.577349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.174 [2024-11-20 09:20:08.577405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.174 { 00:07:43.174 "results": [ 00:07:43.174 { 00:07:43.174 "job": "raid_bdev1", 00:07:43.174 "core_mask": "0x1", 00:07:43.174 "workload": "randrw", 00:07:43.174 "percentage": 50, 00:07:43.174 "status": "finished", 00:07:43.174 "queue_depth": 1, 00:07:43.174 "io_size": 131072, 00:07:43.174 "runtime": 1.34611, 00:07:43.174 "iops": 13103.683948562895, 00:07:43.174 "mibps": 1637.960493570362, 00:07:43.174 "io_failed": 1, 00:07:43.174 "io_timeout": 0, 00:07:43.174 "avg_latency_us": 105.97692501163493, 00:07:43.174 "min_latency_us": 31.972052401746726, 00:07:43.174 "max_latency_us": 1774.3371179039302 00:07:43.174 } 00:07:43.174 ], 00:07:43.174 "core_count": 1 00:07:43.174 } 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61738 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61738 ']' 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61738 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61738 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.174 killing process with pid 61738 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61738' 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61738 00:07:43.174 [2024-11-20 09:20:08.615222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.174 09:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61738 00:07:43.433 [2024-11-20 09:20:08.780577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8Ftib1m0A8 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:44.813 00:07:44.813 real 0m4.734s 00:07:44.813 user 0m5.710s 00:07:44.813 sys 0m0.572s 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.813 09:20:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.813 ************************************ 00:07:44.813 END TEST raid_write_error_test 00:07:44.813 ************************************ 00:07:44.813 09:20:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:44.813 09:20:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:44.813 09:20:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:44.813 09:20:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.813 09:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.813 ************************************ 00:07:44.813 START TEST raid_state_function_test 00:07:44.813 ************************************ 00:07:44.813 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:44.813 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:44.813 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:44.813 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:44.813 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:44.813 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.073 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.074 Process raid pid: 61881 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61881 
00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61881' 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61881 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61881 ']' 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.074 09:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.074 [2024-11-20 09:20:10.371203] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:07:45.074 [2024-11-20 09:20:10.371457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.333 [2024-11-20 09:20:10.538591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.333 [2024-11-20 09:20:10.676861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.593 [2024-11-20 09:20:10.928771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.593 [2024-11-20 09:20:10.928955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.163 [2024-11-20 09:20:11.347136] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.163 [2024-11-20 09:20:11.347217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.163 [2024-11-20 09:20:11.347230] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.163 [2024-11-20 09:20:11.347243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.163 09:20:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.163 "name": "Existed_Raid", 00:07:46.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.163 "strip_size_kb": 64, 00:07:46.163 "state": "configuring", 00:07:46.163 
"raid_level": "concat", 00:07:46.163 "superblock": false, 00:07:46.163 "num_base_bdevs": 2, 00:07:46.163 "num_base_bdevs_discovered": 0, 00:07:46.163 "num_base_bdevs_operational": 2, 00:07:46.163 "base_bdevs_list": [ 00:07:46.163 { 00:07:46.163 "name": "BaseBdev1", 00:07:46.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.163 "is_configured": false, 00:07:46.163 "data_offset": 0, 00:07:46.163 "data_size": 0 00:07:46.163 }, 00:07:46.163 { 00:07:46.163 "name": "BaseBdev2", 00:07:46.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.163 "is_configured": false, 00:07:46.163 "data_offset": 0, 00:07:46.163 "data_size": 0 00:07:46.163 } 00:07:46.163 ] 00:07:46.163 }' 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.163 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.423 [2024-11-20 09:20:11.842363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.423 [2024-11-20 09:20:11.842551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:46.423 [2024-11-20 09:20:11.854343] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.423 [2024-11-20 09:20:11.854425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.423 [2024-11-20 09:20:11.854456] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.423 [2024-11-20 09:20:11.854470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.423 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 [2024-11-20 09:20:11.908700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.684 BaseBdev1 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 [ 00:07:46.684 { 00:07:46.684 "name": "BaseBdev1", 00:07:46.684 "aliases": [ 00:07:46.684 "e5962ed5-0050-49d3-a8b2-db449187f9e2" 00:07:46.684 ], 00:07:46.684 "product_name": "Malloc disk", 00:07:46.684 "block_size": 512, 00:07:46.684 "num_blocks": 65536, 00:07:46.684 "uuid": "e5962ed5-0050-49d3-a8b2-db449187f9e2", 00:07:46.684 "assigned_rate_limits": { 00:07:46.684 "rw_ios_per_sec": 0, 00:07:46.684 "rw_mbytes_per_sec": 0, 00:07:46.684 "r_mbytes_per_sec": 0, 00:07:46.684 "w_mbytes_per_sec": 0 00:07:46.684 }, 00:07:46.684 "claimed": true, 00:07:46.684 "claim_type": "exclusive_write", 00:07:46.684 "zoned": false, 00:07:46.684 "supported_io_types": { 00:07:46.684 "read": true, 00:07:46.684 "write": true, 00:07:46.684 "unmap": true, 00:07:46.684 "flush": true, 00:07:46.684 "reset": true, 00:07:46.684 "nvme_admin": false, 00:07:46.684 "nvme_io": false, 00:07:46.684 "nvme_io_md": false, 00:07:46.684 "write_zeroes": true, 00:07:46.684 "zcopy": true, 00:07:46.684 "get_zone_info": false, 00:07:46.684 "zone_management": false, 00:07:46.684 "zone_append": false, 00:07:46.684 "compare": false, 00:07:46.684 "compare_and_write": false, 00:07:46.684 "abort": true, 00:07:46.684 "seek_hole": false, 00:07:46.684 "seek_data": false, 00:07:46.684 "copy": true, 00:07:46.684 "nvme_iov_md": 
false 00:07:46.684 }, 00:07:46.684 "memory_domains": [ 00:07:46.684 { 00:07:46.684 "dma_device_id": "system", 00:07:46.684 "dma_device_type": 1 00:07:46.684 }, 00:07:46.684 { 00:07:46.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.684 "dma_device_type": 2 00:07:46.684 } 00:07:46.684 ], 00:07:46.684 "driver_specific": {} 00:07:46.684 } 00:07:46.684 ] 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.684 
09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.684 09:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.684 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.684 "name": "Existed_Raid", 00:07:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.684 "strip_size_kb": 64, 00:07:46.684 "state": "configuring", 00:07:46.684 "raid_level": "concat", 00:07:46.684 "superblock": false, 00:07:46.684 "num_base_bdevs": 2, 00:07:46.684 "num_base_bdevs_discovered": 1, 00:07:46.684 "num_base_bdevs_operational": 2, 00:07:46.684 "base_bdevs_list": [ 00:07:46.684 { 00:07:46.684 "name": "BaseBdev1", 00:07:46.684 "uuid": "e5962ed5-0050-49d3-a8b2-db449187f9e2", 00:07:46.684 "is_configured": true, 00:07:46.684 "data_offset": 0, 00:07:46.684 "data_size": 65536 00:07:46.684 }, 00:07:46.684 { 00:07:46.684 "name": "BaseBdev2", 00:07:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.684 "is_configured": false, 00:07:46.684 "data_offset": 0, 00:07:46.684 "data_size": 0 00:07:46.684 } 00:07:46.684 ] 00:07:46.684 }' 00:07:46.684 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.684 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.252 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.253 [2024-11-20 09:20:12.419962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.253 [2024-11-20 09:20:12.420146] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.253 [2024-11-20 09:20:12.432029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.253 [2024-11-20 09:20:12.434285] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.253 [2024-11-20 09:20:12.434402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.253 "name": "Existed_Raid", 00:07:47.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.253 "strip_size_kb": 64, 00:07:47.253 "state": "configuring", 00:07:47.253 "raid_level": "concat", 00:07:47.253 "superblock": false, 00:07:47.253 "num_base_bdevs": 2, 00:07:47.253 "num_base_bdevs_discovered": 1, 00:07:47.253 "num_base_bdevs_operational": 2, 00:07:47.253 "base_bdevs_list": [ 00:07:47.253 { 00:07:47.253 "name": "BaseBdev1", 00:07:47.253 "uuid": "e5962ed5-0050-49d3-a8b2-db449187f9e2", 00:07:47.253 "is_configured": true, 00:07:47.253 "data_offset": 0, 00:07:47.253 "data_size": 65536 00:07:47.253 }, 00:07:47.253 { 00:07:47.253 "name": "BaseBdev2", 00:07:47.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.253 "is_configured": false, 00:07:47.253 "data_offset": 0, 00:07:47.253 "data_size": 0 00:07:47.253 } 
00:07:47.253 ] 00:07:47.253 }' 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.253 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.512 [2024-11-20 09:20:12.960184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.512 [2024-11-20 09:20:12.960338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.512 [2024-11-20 09:20:12.960369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.512 [2024-11-20 09:20:12.960736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.512 [2024-11-20 09:20:12.960972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.512 [2024-11-20 09:20:12.961031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:47.512 [2024-11-20 09:20:12.961397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.512 BaseBdev2 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.512 09:20:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.512 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.772 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.772 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.772 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.772 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.772 09:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.772 [ 00:07:47.772 { 00:07:47.772 "name": "BaseBdev2", 00:07:47.772 "aliases": [ 00:07:47.772 "42904805-acfe-4393-941e-8d30d500dc44" 00:07:47.772 ], 00:07:47.772 "product_name": "Malloc disk", 00:07:47.772 "block_size": 512, 00:07:47.772 "num_blocks": 65536, 00:07:47.772 "uuid": "42904805-acfe-4393-941e-8d30d500dc44", 00:07:47.772 "assigned_rate_limits": { 00:07:47.772 "rw_ios_per_sec": 0, 00:07:47.772 "rw_mbytes_per_sec": 0, 00:07:47.772 "r_mbytes_per_sec": 0, 00:07:47.772 "w_mbytes_per_sec": 0 00:07:47.772 }, 00:07:47.772 "claimed": true, 00:07:47.772 "claim_type": "exclusive_write", 00:07:47.772 "zoned": false, 00:07:47.772 "supported_io_types": { 00:07:47.772 "read": true, 00:07:47.772 "write": true, 00:07:47.772 "unmap": true, 00:07:47.772 "flush": true, 00:07:47.772 "reset": true, 00:07:47.772 "nvme_admin": false, 00:07:47.772 "nvme_io": false, 00:07:47.772 "nvme_io_md": 
false, 00:07:47.772 "write_zeroes": true, 00:07:47.772 "zcopy": true, 00:07:47.772 "get_zone_info": false, 00:07:47.772 "zone_management": false, 00:07:47.772 "zone_append": false, 00:07:47.772 "compare": false, 00:07:47.772 "compare_and_write": false, 00:07:47.772 "abort": true, 00:07:47.772 "seek_hole": false, 00:07:47.772 "seek_data": false, 00:07:47.772 "copy": true, 00:07:47.772 "nvme_iov_md": false 00:07:47.772 }, 00:07:47.772 "memory_domains": [ 00:07:47.772 { 00:07:47.772 "dma_device_id": "system", 00:07:47.772 "dma_device_type": 1 00:07:47.772 }, 00:07:47.772 { 00:07:47.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.772 "dma_device_type": 2 00:07:47.772 } 00:07:47.772 ], 00:07:47.772 "driver_specific": {} 00:07:47.772 } 00:07:47.772 ] 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.772 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.772 "name": "Existed_Raid", 00:07:47.772 "uuid": "219635e2-24f1-46be-a8b0-8f385c40198b", 00:07:47.772 "strip_size_kb": 64, 00:07:47.772 "state": "online", 00:07:47.772 "raid_level": "concat", 00:07:47.772 "superblock": false, 00:07:47.772 "num_base_bdevs": 2, 00:07:47.772 "num_base_bdevs_discovered": 2, 00:07:47.772 "num_base_bdevs_operational": 2, 00:07:47.772 "base_bdevs_list": [ 00:07:47.772 { 00:07:47.772 "name": "BaseBdev1", 00:07:47.772 "uuid": "e5962ed5-0050-49d3-a8b2-db449187f9e2", 00:07:47.772 "is_configured": true, 00:07:47.772 "data_offset": 0, 00:07:47.773 "data_size": 65536 00:07:47.773 }, 00:07:47.773 { 00:07:47.773 "name": "BaseBdev2", 00:07:47.773 "uuid": "42904805-acfe-4393-941e-8d30d500dc44", 00:07:47.773 "is_configured": true, 00:07:47.773 "data_offset": 0, 00:07:47.773 "data_size": 65536 00:07:47.773 } 00:07:47.773 ] 00:07:47.773 }' 00:07:47.773 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:47.773 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.031 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.031 [2024-11-20 09:20:13.479943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.293 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.293 "name": "Existed_Raid", 00:07:48.293 "aliases": [ 00:07:48.293 "219635e2-24f1-46be-a8b0-8f385c40198b" 00:07:48.293 ], 00:07:48.293 "product_name": "Raid Volume", 00:07:48.293 "block_size": 512, 00:07:48.293 "num_blocks": 131072, 00:07:48.293 "uuid": "219635e2-24f1-46be-a8b0-8f385c40198b", 00:07:48.293 "assigned_rate_limits": { 00:07:48.293 "rw_ios_per_sec": 0, 00:07:48.293 "rw_mbytes_per_sec": 0, 00:07:48.293 "r_mbytes_per_sec": 
0, 00:07:48.293 "w_mbytes_per_sec": 0 00:07:48.293 }, 00:07:48.293 "claimed": false, 00:07:48.293 "zoned": false, 00:07:48.293 "supported_io_types": { 00:07:48.293 "read": true, 00:07:48.293 "write": true, 00:07:48.293 "unmap": true, 00:07:48.293 "flush": true, 00:07:48.293 "reset": true, 00:07:48.293 "nvme_admin": false, 00:07:48.293 "nvme_io": false, 00:07:48.293 "nvme_io_md": false, 00:07:48.293 "write_zeroes": true, 00:07:48.293 "zcopy": false, 00:07:48.293 "get_zone_info": false, 00:07:48.293 "zone_management": false, 00:07:48.293 "zone_append": false, 00:07:48.293 "compare": false, 00:07:48.293 "compare_and_write": false, 00:07:48.293 "abort": false, 00:07:48.293 "seek_hole": false, 00:07:48.293 "seek_data": false, 00:07:48.293 "copy": false, 00:07:48.293 "nvme_iov_md": false 00:07:48.293 }, 00:07:48.293 "memory_domains": [ 00:07:48.293 { 00:07:48.293 "dma_device_id": "system", 00:07:48.293 "dma_device_type": 1 00:07:48.293 }, 00:07:48.293 { 00:07:48.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.293 "dma_device_type": 2 00:07:48.293 }, 00:07:48.293 { 00:07:48.294 "dma_device_id": "system", 00:07:48.294 "dma_device_type": 1 00:07:48.294 }, 00:07:48.294 { 00:07:48.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.294 "dma_device_type": 2 00:07:48.294 } 00:07:48.294 ], 00:07:48.294 "driver_specific": { 00:07:48.294 "raid": { 00:07:48.294 "uuid": "219635e2-24f1-46be-a8b0-8f385c40198b", 00:07:48.294 "strip_size_kb": 64, 00:07:48.294 "state": "online", 00:07:48.294 "raid_level": "concat", 00:07:48.294 "superblock": false, 00:07:48.294 "num_base_bdevs": 2, 00:07:48.294 "num_base_bdevs_discovered": 2, 00:07:48.294 "num_base_bdevs_operational": 2, 00:07:48.294 "base_bdevs_list": [ 00:07:48.294 { 00:07:48.294 "name": "BaseBdev1", 00:07:48.294 "uuid": "e5962ed5-0050-49d3-a8b2-db449187f9e2", 00:07:48.294 "is_configured": true, 00:07:48.294 "data_offset": 0, 00:07:48.294 "data_size": 65536 00:07:48.294 }, 00:07:48.294 { 00:07:48.294 "name": "BaseBdev2", 
00:07:48.294 "uuid": "42904805-acfe-4393-941e-8d30d500dc44", 00:07:48.294 "is_configured": true, 00:07:48.294 "data_offset": 0, 00:07:48.294 "data_size": 65536 00:07:48.294 } 00:07:48.294 ] 00:07:48.294 } 00:07:48.294 } 00:07:48.294 }' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.294 BaseBdev2' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.294 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.294 [2024-11-20 09:20:13.723666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.294 [2024-11-20 09:20:13.723720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.294 [2024-11-20 09:20:13.723779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.577 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.578 "name": "Existed_Raid", 00:07:48.578 "uuid": "219635e2-24f1-46be-a8b0-8f385c40198b", 00:07:48.578 "strip_size_kb": 64, 00:07:48.578 
"state": "offline", 00:07:48.578 "raid_level": "concat", 00:07:48.578 "superblock": false, 00:07:48.578 "num_base_bdevs": 2, 00:07:48.578 "num_base_bdevs_discovered": 1, 00:07:48.578 "num_base_bdevs_operational": 1, 00:07:48.578 "base_bdevs_list": [ 00:07:48.578 { 00:07:48.578 "name": null, 00:07:48.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.578 "is_configured": false, 00:07:48.578 "data_offset": 0, 00:07:48.578 "data_size": 65536 00:07:48.578 }, 00:07:48.578 { 00:07:48.578 "name": "BaseBdev2", 00:07:48.578 "uuid": "42904805-acfe-4393-941e-8d30d500dc44", 00:07:48.578 "is_configured": true, 00:07:48.578 "data_offset": 0, 00:07:48.578 "data_size": 65536 00:07:48.578 } 00:07:48.578 ] 00:07:48.578 }' 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.578 09:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.146 [2024-11-20 09:20:14.394111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:49.146 [2024-11-20 09:20:14.394208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61881 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61881 ']' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61881 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.146 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61881 00:07:49.405 killing process with pid 61881 00:07:49.405 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.405 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.405 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61881' 00:07:49.405 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61881 00:07:49.405 [2024-11-20 09:20:14.614063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.405 09:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61881 00:07:49.405 [2024-11-20 09:20:14.633867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.784 ************************************ 00:07:50.784 END TEST raid_state_function_test 00:07:50.784 ************************************ 00:07:50.784 09:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:50.784 00:07:50.784 real 0m5.709s 00:07:50.784 user 0m8.210s 00:07:50.784 sys 0m0.905s 00:07:50.784 09:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.784 09:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.784 09:20:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:50.784 09:20:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:50.784 09:20:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.784 09:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.784 ************************************ 00:07:50.784 START TEST raid_state_function_test_sb 00:07:50.784 ************************************ 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:50.784 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:50.785 Process raid pid: 62140 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62140 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62140' 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62140 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62140 ']' 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.785 09:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.785 [2024-11-20 09:20:16.162087] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:50.785 [2024-11-20 09:20:16.162404] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.044 [2024-11-20 09:20:16.346788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.044 [2024-11-20 09:20:16.488767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.303 [2024-11-20 09:20:16.719809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.303 [2024-11-20 09:20:16.719933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.873 [2024-11-20 09:20:17.085648] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:51.873 [2024-11-20 09:20:17.085835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.873 [2024-11-20 09:20:17.085870] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.873 [2024-11-20 09:20:17.085899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.873 "name": "Existed_Raid", 00:07:51.873 "uuid": "fd5741c1-3831-4291-aaa7-b45f259d4246", 00:07:51.873 "strip_size_kb": 64, 00:07:51.873 "state": "configuring", 00:07:51.873 "raid_level": "concat", 00:07:51.873 "superblock": true, 00:07:51.873 "num_base_bdevs": 2, 00:07:51.873 "num_base_bdevs_discovered": 0, 00:07:51.873 "num_base_bdevs_operational": 2, 00:07:51.873 "base_bdevs_list": [ 00:07:51.873 { 00:07:51.873 "name": "BaseBdev1", 00:07:51.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.873 "is_configured": false, 00:07:51.873 "data_offset": 0, 00:07:51.873 "data_size": 0 00:07:51.873 }, 00:07:51.873 { 00:07:51.873 "name": "BaseBdev2", 00:07:51.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.873 "is_configured": false, 00:07:51.873 "data_offset": 0, 00:07:51.873 "data_size": 0 00:07:51.873 } 00:07:51.873 ] 00:07:51.873 }' 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.873 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.139 [2024-11-20 09:20:17.532776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:52.139 [2024-11-20 09:20:17.532915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.139 [2024-11-20 09:20:17.544779] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.139 [2024-11-20 09:20:17.544840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.139 [2024-11-20 09:20:17.544852] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.139 [2024-11-20 09:20:17.544870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.139 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.438 [2024-11-20 09:20:17.595915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.438 BaseBdev1 00:07:52.438 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.438 09:20:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:52.438 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:52.438 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.439 [ 00:07:52.439 { 00:07:52.439 "name": "BaseBdev1", 00:07:52.439 "aliases": [ 00:07:52.439 "2227f8ef-5609-4495-b435-82e06f1cff0f" 00:07:52.439 ], 00:07:52.439 "product_name": "Malloc disk", 00:07:52.439 "block_size": 512, 00:07:52.439 "num_blocks": 65536, 00:07:52.439 "uuid": "2227f8ef-5609-4495-b435-82e06f1cff0f", 00:07:52.439 "assigned_rate_limits": { 00:07:52.439 "rw_ios_per_sec": 0, 00:07:52.439 "rw_mbytes_per_sec": 0, 00:07:52.439 "r_mbytes_per_sec": 0, 00:07:52.439 "w_mbytes_per_sec": 0 00:07:52.439 }, 00:07:52.439 "claimed": true, 
00:07:52.439 "claim_type": "exclusive_write", 00:07:52.439 "zoned": false, 00:07:52.439 "supported_io_types": { 00:07:52.439 "read": true, 00:07:52.439 "write": true, 00:07:52.439 "unmap": true, 00:07:52.439 "flush": true, 00:07:52.439 "reset": true, 00:07:52.439 "nvme_admin": false, 00:07:52.439 "nvme_io": false, 00:07:52.439 "nvme_io_md": false, 00:07:52.439 "write_zeroes": true, 00:07:52.439 "zcopy": true, 00:07:52.439 "get_zone_info": false, 00:07:52.439 "zone_management": false, 00:07:52.439 "zone_append": false, 00:07:52.439 "compare": false, 00:07:52.439 "compare_and_write": false, 00:07:52.439 "abort": true, 00:07:52.439 "seek_hole": false, 00:07:52.439 "seek_data": false, 00:07:52.439 "copy": true, 00:07:52.439 "nvme_iov_md": false 00:07:52.439 }, 00:07:52.439 "memory_domains": [ 00:07:52.439 { 00:07:52.439 "dma_device_id": "system", 00:07:52.439 "dma_device_type": 1 00:07:52.439 }, 00:07:52.439 { 00:07:52.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.439 "dma_device_type": 2 00:07:52.439 } 00:07:52.439 ], 00:07:52.439 "driver_specific": {} 00:07:52.439 } 00:07:52.439 ] 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.439 09:20:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.439 "name": "Existed_Raid", 00:07:52.439 "uuid": "78976d6a-780b-42b4-b9a7-2d82783fdedd", 00:07:52.439 "strip_size_kb": 64, 00:07:52.439 "state": "configuring", 00:07:52.439 "raid_level": "concat", 00:07:52.439 "superblock": true, 00:07:52.439 "num_base_bdevs": 2, 00:07:52.439 "num_base_bdevs_discovered": 1, 00:07:52.439 "num_base_bdevs_operational": 2, 00:07:52.439 "base_bdevs_list": [ 00:07:52.439 { 00:07:52.439 "name": "BaseBdev1", 00:07:52.439 "uuid": "2227f8ef-5609-4495-b435-82e06f1cff0f", 00:07:52.439 "is_configured": true, 00:07:52.439 "data_offset": 2048, 00:07:52.439 "data_size": 63488 00:07:52.439 }, 00:07:52.439 { 00:07:52.439 "name": "BaseBdev2", 00:07:52.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.439 
"is_configured": false, 00:07:52.439 "data_offset": 0, 00:07:52.439 "data_size": 0 00:07:52.439 } 00:07:52.439 ] 00:07:52.439 }' 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.439 09:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.698 [2024-11-20 09:20:18.083248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.698 [2024-11-20 09:20:18.083321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.698 [2024-11-20 09:20:18.095288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.698 [2024-11-20 09:20:18.097518] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.698 [2024-11-20 09:20:18.097621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.698 09:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.698 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.957 09:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.957 "name": "Existed_Raid", 00:07:52.957 "uuid": "db2eb6b8-016c-4f79-b3a7-59eaeb296eec", 00:07:52.957 "strip_size_kb": 64, 00:07:52.957 "state": "configuring", 00:07:52.957 "raid_level": "concat", 00:07:52.957 "superblock": true, 00:07:52.957 "num_base_bdevs": 2, 00:07:52.957 "num_base_bdevs_discovered": 1, 00:07:52.957 "num_base_bdevs_operational": 2, 00:07:52.957 "base_bdevs_list": [ 00:07:52.957 { 00:07:52.957 "name": "BaseBdev1", 00:07:52.957 "uuid": "2227f8ef-5609-4495-b435-82e06f1cff0f", 00:07:52.957 "is_configured": true, 00:07:52.957 "data_offset": 2048, 00:07:52.957 "data_size": 63488 00:07:52.957 }, 00:07:52.957 { 00:07:52.957 "name": "BaseBdev2", 00:07:52.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.957 "is_configured": false, 00:07:52.957 "data_offset": 0, 00:07:52.957 "data_size": 0 00:07:52.957 } 00:07:52.957 ] 00:07:52.958 }' 00:07:52.958 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.958 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.218 [2024-11-20 09:20:18.611950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.218 [2024-11-20 09:20:18.612242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:53.218 [2024-11-20 09:20:18.612258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:53.218 [2024-11-20 09:20:18.612601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:53.218 [2024-11-20 09:20:18.612776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:53.218 [2024-11-20 09:20:18.612792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:53.218 [2024-11-20 09:20:18.612950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.218 BaseBdev2 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.218 09:20:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.218 [ 00:07:53.218 { 00:07:53.218 "name": "BaseBdev2", 00:07:53.218 "aliases": [ 00:07:53.218 "11ecf7a9-024d-4dca-b1d8-5be4d5379500" 00:07:53.218 ], 00:07:53.218 "product_name": "Malloc disk", 00:07:53.218 "block_size": 512, 00:07:53.218 "num_blocks": 65536, 00:07:53.218 "uuid": "11ecf7a9-024d-4dca-b1d8-5be4d5379500", 00:07:53.218 "assigned_rate_limits": { 00:07:53.218 "rw_ios_per_sec": 0, 00:07:53.218 "rw_mbytes_per_sec": 0, 00:07:53.218 "r_mbytes_per_sec": 0, 00:07:53.218 "w_mbytes_per_sec": 0 00:07:53.218 }, 00:07:53.218 "claimed": true, 00:07:53.218 "claim_type": "exclusive_write", 00:07:53.218 "zoned": false, 00:07:53.218 "supported_io_types": { 00:07:53.218 "read": true, 00:07:53.218 "write": true, 00:07:53.218 "unmap": true, 00:07:53.218 "flush": true, 00:07:53.218 "reset": true, 00:07:53.218 "nvme_admin": false, 00:07:53.218 "nvme_io": false, 00:07:53.218 "nvme_io_md": false, 00:07:53.218 "write_zeroes": true, 00:07:53.218 "zcopy": true, 00:07:53.218 "get_zone_info": false, 00:07:53.218 "zone_management": false, 00:07:53.218 "zone_append": false, 00:07:53.218 "compare": false, 00:07:53.218 "compare_and_write": false, 00:07:53.218 "abort": true, 00:07:53.218 "seek_hole": false, 00:07:53.218 "seek_data": false, 00:07:53.218 "copy": true, 00:07:53.218 "nvme_iov_md": false 00:07:53.218 }, 00:07:53.218 "memory_domains": [ 00:07:53.218 { 00:07:53.218 "dma_device_id": "system", 00:07:53.218 "dma_device_type": 1 00:07:53.218 }, 00:07:53.218 { 00:07:53.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.218 "dma_device_type": 2 00:07:53.218 } 00:07:53.218 ], 00:07:53.218 "driver_specific": {} 00:07:53.218 } 00:07:53.218 ] 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:53.218 09:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.218 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.479 09:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.479 "name": "Existed_Raid", 00:07:53.479 "uuid": "db2eb6b8-016c-4f79-b3a7-59eaeb296eec", 00:07:53.479 "strip_size_kb": 64, 00:07:53.479 "state": "online", 00:07:53.479 "raid_level": "concat", 00:07:53.479 "superblock": true, 00:07:53.479 "num_base_bdevs": 2, 00:07:53.479 "num_base_bdevs_discovered": 2, 00:07:53.479 "num_base_bdevs_operational": 2, 00:07:53.479 "base_bdevs_list": [ 00:07:53.479 { 00:07:53.479 "name": "BaseBdev1", 00:07:53.479 "uuid": "2227f8ef-5609-4495-b435-82e06f1cff0f", 00:07:53.479 "is_configured": true, 00:07:53.479 "data_offset": 2048, 00:07:53.479 "data_size": 63488 00:07:53.479 }, 00:07:53.479 { 00:07:53.479 "name": "BaseBdev2", 00:07:53.479 "uuid": "11ecf7a9-024d-4dca-b1d8-5be4d5379500", 00:07:53.479 "is_configured": true, 00:07:53.479 "data_offset": 2048, 00:07:53.479 "data_size": 63488 00:07:53.479 } 00:07:53.479 ] 00:07:53.479 }' 00:07:53.479 09:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.479 09:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.738 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.738 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.738 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.738 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.738 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.739 [2024-11-20 09:20:19.127593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.739 "name": "Existed_Raid", 00:07:53.739 "aliases": [ 00:07:53.739 "db2eb6b8-016c-4f79-b3a7-59eaeb296eec" 00:07:53.739 ], 00:07:53.739 "product_name": "Raid Volume", 00:07:53.739 "block_size": 512, 00:07:53.739 "num_blocks": 126976, 00:07:53.739 "uuid": "db2eb6b8-016c-4f79-b3a7-59eaeb296eec", 00:07:53.739 "assigned_rate_limits": { 00:07:53.739 "rw_ios_per_sec": 0, 00:07:53.739 "rw_mbytes_per_sec": 0, 00:07:53.739 "r_mbytes_per_sec": 0, 00:07:53.739 "w_mbytes_per_sec": 0 00:07:53.739 }, 00:07:53.739 "claimed": false, 00:07:53.739 "zoned": false, 00:07:53.739 "supported_io_types": { 00:07:53.739 "read": true, 00:07:53.739 "write": true, 00:07:53.739 "unmap": true, 00:07:53.739 "flush": true, 00:07:53.739 "reset": true, 00:07:53.739 "nvme_admin": false, 00:07:53.739 "nvme_io": false, 00:07:53.739 "nvme_io_md": false, 00:07:53.739 "write_zeroes": true, 00:07:53.739 "zcopy": false, 00:07:53.739 "get_zone_info": false, 00:07:53.739 "zone_management": false, 00:07:53.739 "zone_append": false, 00:07:53.739 "compare": false, 00:07:53.739 "compare_and_write": false, 00:07:53.739 "abort": false, 00:07:53.739 "seek_hole": false, 00:07:53.739 "seek_data": false, 00:07:53.739 "copy": false, 00:07:53.739 "nvme_iov_md": false 00:07:53.739 }, 00:07:53.739 "memory_domains": [ 00:07:53.739 { 00:07:53.739 
"dma_device_id": "system", 00:07:53.739 "dma_device_type": 1 00:07:53.739 }, 00:07:53.739 { 00:07:53.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.739 "dma_device_type": 2 00:07:53.739 }, 00:07:53.739 { 00:07:53.739 "dma_device_id": "system", 00:07:53.739 "dma_device_type": 1 00:07:53.739 }, 00:07:53.739 { 00:07:53.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.739 "dma_device_type": 2 00:07:53.739 } 00:07:53.739 ], 00:07:53.739 "driver_specific": { 00:07:53.739 "raid": { 00:07:53.739 "uuid": "db2eb6b8-016c-4f79-b3a7-59eaeb296eec", 00:07:53.739 "strip_size_kb": 64, 00:07:53.739 "state": "online", 00:07:53.739 "raid_level": "concat", 00:07:53.739 "superblock": true, 00:07:53.739 "num_base_bdevs": 2, 00:07:53.739 "num_base_bdevs_discovered": 2, 00:07:53.739 "num_base_bdevs_operational": 2, 00:07:53.739 "base_bdevs_list": [ 00:07:53.739 { 00:07:53.739 "name": "BaseBdev1", 00:07:53.739 "uuid": "2227f8ef-5609-4495-b435-82e06f1cff0f", 00:07:53.739 "is_configured": true, 00:07:53.739 "data_offset": 2048, 00:07:53.739 "data_size": 63488 00:07:53.739 }, 00:07:53.739 { 00:07:53.739 "name": "BaseBdev2", 00:07:53.739 "uuid": "11ecf7a9-024d-4dca-b1d8-5be4d5379500", 00:07:53.739 "is_configured": true, 00:07:53.739 "data_offset": 2048, 00:07:53.739 "data_size": 63488 00:07:53.739 } 00:07:53.739 ] 00:07:53.739 } 00:07:53.739 } 00:07:53.739 }' 00:07:53.739 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:54.008 BaseBdev2' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.008 09:20:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.008 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.008 [2024-11-20 09:20:19.370920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.008 [2024-11-20 09:20:19.371068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.008 [2024-11-20 09:20:19.371137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.267 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.268 "name": "Existed_Raid", 00:07:54.268 "uuid": "db2eb6b8-016c-4f79-b3a7-59eaeb296eec", 00:07:54.268 "strip_size_kb": 64, 00:07:54.268 "state": "offline", 00:07:54.268 "raid_level": "concat", 00:07:54.268 "superblock": true, 00:07:54.268 "num_base_bdevs": 2, 00:07:54.268 "num_base_bdevs_discovered": 1, 00:07:54.268 "num_base_bdevs_operational": 1, 00:07:54.268 "base_bdevs_list": [ 00:07:54.268 { 00:07:54.268 "name": null, 00:07:54.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.268 "is_configured": false, 00:07:54.268 "data_offset": 0, 00:07:54.268 "data_size": 63488 00:07:54.268 }, 00:07:54.268 { 00:07:54.268 "name": "BaseBdev2", 00:07:54.268 "uuid": "11ecf7a9-024d-4dca-b1d8-5be4d5379500", 00:07:54.268 "is_configured": true, 00:07:54.268 "data_offset": 2048, 00:07:54.268 "data_size": 63488 00:07:54.268 } 00:07:54.268 ] 
00:07:54.268 }' 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.268 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.527 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.787 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.787 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.787 09:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:54.787 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.787 09:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.787 [2024-11-20 09:20:20.001303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.787 [2024-11-20 09:20:20.001494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.787 09:20:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62140 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62140 ']' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62140 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62140 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.787 killing process with pid 62140 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62140' 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62140 00:07:54.787 [2024-11-20 09:20:20.221856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.787 09:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62140 00:07:55.045 [2024-11-20 09:20:20.241895] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.082 09:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:56.082 00:07:56.082 real 0m5.486s 00:07:56.082 user 0m7.805s 00:07:56.082 sys 0m0.949s 00:07:56.082 09:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.082 09:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.082 ************************************ 00:07:56.082 END TEST raid_state_function_test_sb 00:07:56.082 ************************************ 00:07:56.342 09:20:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:56.342 09:20:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:56.342 09:20:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.342 09:20:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.342 ************************************ 00:07:56.342 START TEST raid_superblock_test 00:07:56.342 ************************************ 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62392 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62392 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62392 ']' 00:07:56.342 
09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.342 09:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.342 [2024-11-20 09:20:21.712596] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:07:56.342 [2024-11-20 09:20:21.712889] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62392 ] 00:07:56.601 [2024-11-20 09:20:21.899810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.601 [2024-11-20 09:20:22.041228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.860 [2024-11-20 09:20:22.298814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.860 [2024-11-20 09:20:22.299001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.429 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 malloc1 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 [2024-11-20 09:20:22.742753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.430 [2024-11-20 09:20:22.742957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.430 [2024-11-20 09:20:22.743028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:57.430 [2024-11-20 09:20:22.743074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:57.430 [2024-11-20 09:20:22.745724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.430 [2024-11-20 09:20:22.745844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.430 pt1 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 malloc2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 [2024-11-20 09:20:22.801088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.430 [2024-11-20 09:20:22.801277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.430 [2024-11-20 09:20:22.801333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:57.430 [2024-11-20 09:20:22.801377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.430 [2024-11-20 09:20:22.803818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.430 [2024-11-20 09:20:22.803959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.430 pt2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 [2024-11-20 09:20:22.813200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.430 [2024-11-20 09:20:22.815294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.430 [2024-11-20 09:20:22.815547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:57.430 [2024-11-20 09:20:22.815565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:57.430 [2024-11-20 09:20:22.815889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:57.430 [2024-11-20 09:20:22.816086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:57.430 [2024-11-20 09:20:22.816102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:57.430 [2024-11-20 09:20:22.816302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.430 09:20:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.430 "name": "raid_bdev1", 00:07:57.430 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:57.430 "strip_size_kb": 64, 00:07:57.430 "state": "online", 00:07:57.430 "raid_level": "concat", 00:07:57.430 "superblock": true, 00:07:57.430 "num_base_bdevs": 2, 00:07:57.430 "num_base_bdevs_discovered": 2, 00:07:57.430 "num_base_bdevs_operational": 2, 00:07:57.430 "base_bdevs_list": [ 00:07:57.430 { 00:07:57.430 "name": "pt1", 00:07:57.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.430 "is_configured": true, 00:07:57.430 "data_offset": 2048, 00:07:57.430 "data_size": 63488 00:07:57.430 }, 00:07:57.430 { 00:07:57.430 "name": "pt2", 00:07:57.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.430 "is_configured": true, 00:07:57.430 "data_offset": 2048, 00:07:57.430 "data_size": 63488 00:07:57.430 } 00:07:57.430 ] 00:07:57.430 }' 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.430 09:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.998 
09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.998 [2024-11-20 09:20:23.300738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.998 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.998 "name": "raid_bdev1", 00:07:57.998 "aliases": [ 00:07:57.998 "109d95b1-aab7-4063-90fd-e06e8fb060de" 00:07:57.998 ], 00:07:57.998 "product_name": "Raid Volume", 00:07:57.998 "block_size": 512, 00:07:57.998 "num_blocks": 126976, 00:07:57.998 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:57.998 "assigned_rate_limits": { 00:07:57.998 "rw_ios_per_sec": 0, 00:07:57.998 "rw_mbytes_per_sec": 0, 00:07:57.998 "r_mbytes_per_sec": 0, 00:07:57.998 "w_mbytes_per_sec": 0 00:07:57.998 }, 00:07:57.998 "claimed": false, 00:07:57.998 "zoned": false, 00:07:57.998 "supported_io_types": { 00:07:57.998 "read": true, 00:07:57.998 "write": true, 00:07:57.998 "unmap": true, 00:07:57.998 "flush": true, 00:07:57.998 "reset": true, 00:07:57.998 "nvme_admin": false, 00:07:57.998 "nvme_io": false, 00:07:57.998 "nvme_io_md": false, 00:07:57.998 "write_zeroes": true, 00:07:57.998 "zcopy": false, 00:07:57.998 "get_zone_info": false, 00:07:57.998 "zone_management": false, 00:07:57.998 "zone_append": false, 00:07:57.998 "compare": false, 00:07:57.998 "compare_and_write": false, 00:07:57.998 "abort": false, 00:07:57.998 "seek_hole": false, 00:07:57.998 
"seek_data": false, 00:07:57.998 "copy": false, 00:07:57.998 "nvme_iov_md": false 00:07:57.998 }, 00:07:57.998 "memory_domains": [ 00:07:57.998 { 00:07:57.998 "dma_device_id": "system", 00:07:57.998 "dma_device_type": 1 00:07:57.998 }, 00:07:57.998 { 00:07:57.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.998 "dma_device_type": 2 00:07:57.998 }, 00:07:57.998 { 00:07:57.998 "dma_device_id": "system", 00:07:57.998 "dma_device_type": 1 00:07:57.999 }, 00:07:57.999 { 00:07:57.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.999 "dma_device_type": 2 00:07:57.999 } 00:07:57.999 ], 00:07:57.999 "driver_specific": { 00:07:57.999 "raid": { 00:07:57.999 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:57.999 "strip_size_kb": 64, 00:07:57.999 "state": "online", 00:07:57.999 "raid_level": "concat", 00:07:57.999 "superblock": true, 00:07:57.999 "num_base_bdevs": 2, 00:07:57.999 "num_base_bdevs_discovered": 2, 00:07:57.999 "num_base_bdevs_operational": 2, 00:07:57.999 "base_bdevs_list": [ 00:07:57.999 { 00:07:57.999 "name": "pt1", 00:07:57.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.999 "is_configured": true, 00:07:57.999 "data_offset": 2048, 00:07:57.999 "data_size": 63488 00:07:57.999 }, 00:07:57.999 { 00:07:57.999 "name": "pt2", 00:07:57.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.999 "is_configured": true, 00:07:57.999 "data_offset": 2048, 00:07:57.999 "data_size": 63488 00:07:57.999 } 00:07:57.999 ] 00:07:57.999 } 00:07:57.999 } 00:07:57.999 }' 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.999 pt2' 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.999 09:20:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.999 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.257 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.257 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.257 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.257 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 [2024-11-20 09:20:23.524601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=109d95b1-aab7-4063-90fd-e06e8fb060de 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 109d95b1-aab7-4063-90fd-e06e8fb060de ']' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 [2024-11-20 09:20:23.572215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.258 [2024-11-20 09:20:23.572385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.258 [2024-11-20 09:20:23.572573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.258 [2024-11-20 09:20:23.572642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.258 [2024-11-20 09:20:23.572660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.258 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.518 [2024-11-20 09:20:23.716241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:58.518 [2024-11-20 09:20:23.718575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:58.518 [2024-11-20 09:20:23.718663] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:58.518 [2024-11-20 09:20:23.718727] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:58.518 [2024-11-20 09:20:23.718745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.518 [2024-11-20 09:20:23.718757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:58.518 request: 00:07:58.518 { 00:07:58.518 "name": "raid_bdev1", 00:07:58.518 "raid_level": "concat", 00:07:58.518 "base_bdevs": [ 00:07:58.518 "malloc1", 00:07:58.518 "malloc2" 00:07:58.518 ], 00:07:58.518 "strip_size_kb": 64, 00:07:58.518 "superblock": false, 00:07:58.518 "method": "bdev_raid_create", 00:07:58.518 "req_id": 1 00:07:58.518 } 00:07:58.518 Got JSON-RPC error response 00:07:58.518 response: 00:07:58.518 { 00:07:58.518 "code": -17, 00:07:58.518 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:58.518 } 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.518 [2024-11-20 09:20:23.780100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.518 [2024-11-20 09:20:23.780290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.518 [2024-11-20 09:20:23.780345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:58.518 [2024-11-20 09:20:23.780382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.518 [2024-11-20 09:20:23.782818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.518 [2024-11-20 09:20:23.782935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.518 [2024-11-20 09:20:23.783062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:58.518 [2024-11-20 09:20:23.783169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.518 pt1 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.518 "name": "raid_bdev1", 00:07:58.518 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:58.518 "strip_size_kb": 64, 00:07:58.518 "state": "configuring", 00:07:58.518 "raid_level": "concat", 00:07:58.518 "superblock": true, 00:07:58.518 "num_base_bdevs": 2, 00:07:58.518 "num_base_bdevs_discovered": 1, 00:07:58.518 "num_base_bdevs_operational": 2, 00:07:58.518 "base_bdevs_list": [ 00:07:58.518 { 00:07:58.518 
"name": "pt1", 00:07:58.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.518 "is_configured": true, 00:07:58.518 "data_offset": 2048, 00:07:58.518 "data_size": 63488 00:07:58.518 }, 00:07:58.518 { 00:07:58.518 "name": null, 00:07:58.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.518 "is_configured": false, 00:07:58.518 "data_offset": 2048, 00:07:58.518 "data_size": 63488 00:07:58.518 } 00:07:58.518 ] 00:07:58.518 }' 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.518 09:20:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.086 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.087 [2024-11-20 09:20:24.283290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.087 [2024-11-20 09:20:24.283398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.087 [2024-11-20 09:20:24.283421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:59.087 [2024-11-20 09:20:24.283448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.087 [2024-11-20 09:20:24.283972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.087 [2024-11-20 09:20:24.284007] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.087 [2024-11-20 09:20:24.284114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:59.087 [2024-11-20 09:20:24.284154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.087 [2024-11-20 09:20:24.284281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.087 [2024-11-20 09:20:24.284296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:59.087 [2024-11-20 09:20:24.284580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:59.087 [2024-11-20 09:20:24.284751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.087 [2024-11-20 09:20:24.284762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:59.087 [2024-11-20 09:20:24.284916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.087 pt2 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.087 
09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.087 "name": "raid_bdev1", 00:07:59.087 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:59.087 "strip_size_kb": 64, 00:07:59.087 "state": "online", 00:07:59.087 "raid_level": "concat", 00:07:59.087 "superblock": true, 00:07:59.087 "num_base_bdevs": 2, 00:07:59.087 "num_base_bdevs_discovered": 2, 00:07:59.087 "num_base_bdevs_operational": 2, 00:07:59.087 "base_bdevs_list": [ 00:07:59.087 { 00:07:59.087 "name": "pt1", 00:07:59.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.087 "is_configured": true, 00:07:59.087 "data_offset": 2048, 00:07:59.087 "data_size": 63488 00:07:59.087 }, 00:07:59.087 { 00:07:59.087 "name": "pt2", 00:07:59.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.087 "is_configured": true, 00:07:59.087 "data_offset": 2048, 00:07:59.087 "data_size": 63488 
00:07:59.087 } 00:07:59.087 ] 00:07:59.087 }' 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.087 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.347 [2024-11-20 09:20:24.774739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.347 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.606 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.606 "name": "raid_bdev1", 00:07:59.606 "aliases": [ 00:07:59.606 "109d95b1-aab7-4063-90fd-e06e8fb060de" 00:07:59.606 ], 00:07:59.606 "product_name": "Raid Volume", 00:07:59.606 "block_size": 512, 00:07:59.606 "num_blocks": 126976, 00:07:59.606 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:59.606 "assigned_rate_limits": { 00:07:59.606 
"rw_ios_per_sec": 0, 00:07:59.606 "rw_mbytes_per_sec": 0, 00:07:59.607 "r_mbytes_per_sec": 0, 00:07:59.607 "w_mbytes_per_sec": 0 00:07:59.607 }, 00:07:59.607 "claimed": false, 00:07:59.607 "zoned": false, 00:07:59.607 "supported_io_types": { 00:07:59.607 "read": true, 00:07:59.607 "write": true, 00:07:59.607 "unmap": true, 00:07:59.607 "flush": true, 00:07:59.607 "reset": true, 00:07:59.607 "nvme_admin": false, 00:07:59.607 "nvme_io": false, 00:07:59.607 "nvme_io_md": false, 00:07:59.607 "write_zeroes": true, 00:07:59.607 "zcopy": false, 00:07:59.607 "get_zone_info": false, 00:07:59.607 "zone_management": false, 00:07:59.607 "zone_append": false, 00:07:59.607 "compare": false, 00:07:59.607 "compare_and_write": false, 00:07:59.607 "abort": false, 00:07:59.607 "seek_hole": false, 00:07:59.607 "seek_data": false, 00:07:59.607 "copy": false, 00:07:59.607 "nvme_iov_md": false 00:07:59.607 }, 00:07:59.607 "memory_domains": [ 00:07:59.607 { 00:07:59.607 "dma_device_id": "system", 00:07:59.607 "dma_device_type": 1 00:07:59.607 }, 00:07:59.607 { 00:07:59.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.607 "dma_device_type": 2 00:07:59.607 }, 00:07:59.607 { 00:07:59.607 "dma_device_id": "system", 00:07:59.607 "dma_device_type": 1 00:07:59.607 }, 00:07:59.607 { 00:07:59.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.607 "dma_device_type": 2 00:07:59.607 } 00:07:59.607 ], 00:07:59.607 "driver_specific": { 00:07:59.607 "raid": { 00:07:59.607 "uuid": "109d95b1-aab7-4063-90fd-e06e8fb060de", 00:07:59.607 "strip_size_kb": 64, 00:07:59.607 "state": "online", 00:07:59.607 "raid_level": "concat", 00:07:59.607 "superblock": true, 00:07:59.607 "num_base_bdevs": 2, 00:07:59.607 "num_base_bdevs_discovered": 2, 00:07:59.607 "num_base_bdevs_operational": 2, 00:07:59.607 "base_bdevs_list": [ 00:07:59.607 { 00:07:59.607 "name": "pt1", 00:07:59.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.607 "is_configured": true, 00:07:59.607 "data_offset": 2048, 00:07:59.607 
"data_size": 63488 00:07:59.607 }, 00:07:59.607 { 00:07:59.607 "name": "pt2", 00:07:59.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.607 "is_configured": true, 00:07:59.607 "data_offset": 2048, 00:07:59.607 "data_size": 63488 00:07:59.607 } 00:07:59.607 ] 00:07:59.607 } 00:07:59.607 } 00:07:59.607 }' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:59.607 pt2' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.607 09:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:59.607 [2024-11-20 09:20:25.046290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.607 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 109d95b1-aab7-4063-90fd-e06e8fb060de '!=' 109d95b1-aab7-4063-90fd-e06e8fb060de ']' 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62392 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62392 ']' 
00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62392 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62392 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62392' 00:07:59.867 killing process with pid 62392 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62392 00:07:59.867 [2024-11-20 09:20:25.137637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.867 09:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62392 00:07:59.867 [2024-11-20 09:20:25.137905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.867 [2024-11-20 09:20:25.138023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.867 [2024-11-20 09:20:25.138086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.127 [2024-11-20 09:20:25.381918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.506 09:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.506 00:08:01.506 real 0m5.033s 00:08:01.506 user 0m7.045s 00:08:01.507 sys 0m0.897s 00:08:01.507 09:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.507 09:20:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 ************************************ 00:08:01.507 END TEST raid_superblock_test 00:08:01.507 ************************************ 00:08:01.507 09:20:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:01.507 09:20:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.507 09:20:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.507 09:20:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 ************************************ 00:08:01.507 START TEST raid_read_error_test 00:08:01.507 ************************************ 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.507 
09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uBeROHkMv6 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62609 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62609 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62609 ']' 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.507 09:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 [2024-11-20 09:20:26.835315] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:08:01.507 [2024-11-20 09:20:26.835619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62609 ] 00:08:01.767 [2024-11-20 09:20:27.019178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.767 [2024-11-20 09:20:27.157127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.026 [2024-11-20 09:20:27.403348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.026 [2024-11-20 09:20:27.403404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.596 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.596 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.597 BaseBdev1_malloc 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 true 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 [2024-11-20 09:20:27.817656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.597 [2024-11-20 09:20:27.817859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.597 [2024-11-20 09:20:27.817892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.597 [2024-11-20 09:20:27.817907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.597 [2024-11-20 09:20:27.820515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.597 [2024-11-20 09:20:27.820577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.597 BaseBdev1 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.597 09:20:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 BaseBdev2_malloc 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 true 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 [2024-11-20 09:20:27.891832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.597 [2024-11-20 09:20:27.891931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.597 [2024-11-20 09:20:27.891955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.597 [2024-11-20 09:20:27.891970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.597 [2024-11-20 09:20:27.894624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.597 [2024-11-20 09:20:27.894792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:08:02.597 BaseBdev2 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 [2024-11-20 09:20:27.903893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.597 [2024-11-20 09:20:27.906082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.597 [2024-11-20 09:20:27.906333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.597 [2024-11-20 09:20:27.906352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.597 [2024-11-20 09:20:27.906834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:02.597 [2024-11-20 09:20:27.907120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.597 [2024-11-20 09:20:27.907190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:02.597 [2024-11-20 09:20:27.907487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.597 "name": "raid_bdev1", 00:08:02.597 "uuid": "23031363-be3e-47c0-b03b-8a729c483ff3", 00:08:02.597 "strip_size_kb": 64, 00:08:02.597 "state": "online", 00:08:02.597 "raid_level": "concat", 00:08:02.597 "superblock": true, 00:08:02.597 "num_base_bdevs": 2, 00:08:02.597 "num_base_bdevs_discovered": 2, 00:08:02.597 "num_base_bdevs_operational": 2, 00:08:02.597 "base_bdevs_list": [ 00:08:02.597 { 00:08:02.597 "name": "BaseBdev1", 00:08:02.597 "uuid": "d461af5b-b504-586a-a837-5c7ceccd2104", 00:08:02.597 "is_configured": true, 00:08:02.597 "data_offset": 2048, 00:08:02.597 "data_size": 63488 
00:08:02.597 }, 00:08:02.597 { 00:08:02.597 "name": "BaseBdev2", 00:08:02.597 "uuid": "bdd0956f-c1b0-56ac-a8ba-52d299271255", 00:08:02.597 "is_configured": true, 00:08:02.597 "data_offset": 2048, 00:08:02.597 "data_size": 63488 00:08:02.597 } 00:08:02.597 ] 00:08:02.597 }' 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.597 09:20:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.166 09:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.166 09:20:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.166 [2024-11-20 09:20:28.428620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.105 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.106 "name": "raid_bdev1", 00:08:04.106 "uuid": "23031363-be3e-47c0-b03b-8a729c483ff3", 00:08:04.106 "strip_size_kb": 64, 00:08:04.106 "state": "online", 00:08:04.106 "raid_level": "concat", 00:08:04.106 "superblock": true, 00:08:04.106 "num_base_bdevs": 2, 00:08:04.106 "num_base_bdevs_discovered": 2, 00:08:04.106 "num_base_bdevs_operational": 2, 00:08:04.106 "base_bdevs_list": [ 00:08:04.106 { 00:08:04.106 "name": "BaseBdev1", 00:08:04.106 "uuid": "d461af5b-b504-586a-a837-5c7ceccd2104", 00:08:04.106 "is_configured": true, 00:08:04.106 "data_offset": 2048, 00:08:04.106 "data_size": 63488 
00:08:04.106 }, 00:08:04.106 { 00:08:04.106 "name": "BaseBdev2", 00:08:04.106 "uuid": "bdd0956f-c1b0-56ac-a8ba-52d299271255", 00:08:04.106 "is_configured": true, 00:08:04.106 "data_offset": 2048, 00:08:04.106 "data_size": 63488 00:08:04.106 } 00:08:04.106 ] 00:08:04.106 }' 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.106 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.365 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.365 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.365 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.365 [2024-11-20 09:20:29.797862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.365 [2024-11-20 09:20:29.798026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.365 [2024-11-20 09:20:29.801273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.365 [2024-11-20 09:20:29.801393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.365 [2024-11-20 09:20:29.801466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.365 [2024-11-20 09:20:29.801528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.365 { 00:08:04.365 "results": [ 00:08:04.365 { 00:08:04.365 "job": "raid_bdev1", 00:08:04.365 "core_mask": "0x1", 00:08:04.365 "workload": "randrw", 00:08:04.365 "percentage": 50, 00:08:04.365 "status": "finished", 00:08:04.365 "queue_depth": 1, 00:08:04.365 "io_size": 131072, 00:08:04.365 "runtime": 1.36986, 00:08:04.365 "iops": 13094.768808491379, 00:08:04.365 "mibps": 1636.8461010614224, 00:08:04.365 
"io_failed": 1, 00:08:04.365 "io_timeout": 0, 00:08:04.365 "avg_latency_us": 106.02549162847116, 00:08:04.365 "min_latency_us": 27.94759825327511, 00:08:04.365 "max_latency_us": 1781.4917030567685 00:08:04.365 } 00:08:04.365 ], 00:08:04.365 "core_count": 1 00:08:04.365 } 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62609 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62609 ']' 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62609 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.366 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62609 00:08:04.625 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.625 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.625 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62609' 00:08:04.625 killing process with pid 62609 00:08:04.625 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62609 00:08:04.625 [2024-11-20 09:20:29.850129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.625 09:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62609 00:08:04.625 [2024-11-20 09:20:30.004019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uBeROHkMv6 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:06.058 ************************************ 00:08:06.058 END TEST raid_read_error_test 00:08:06.058 ************************************ 00:08:06.058 00:08:06.058 real 0m4.721s 00:08:06.058 user 0m5.626s 00:08:06.058 sys 0m0.605s 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.058 09:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.058 09:20:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:06.058 09:20:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.058 09:20:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.058 09:20:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.058 ************************************ 00:08:06.058 START TEST raid_write_error_test 00:08:06.058 ************************************ 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.058 09:20:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.058 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.059 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.318 09:20:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xphPANDzUM 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62755 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62755 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62755 ']' 00:08:06.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.318 09:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.318 [2024-11-20 09:20:31.620511] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:08:06.318 [2024-11-20 09:20:31.620779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62755 ] 00:08:06.578 [2024-11-20 09:20:31.790239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.578 [2024-11-20 09:20:31.973802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.838 [2024-11-20 09:20:32.253795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.838 [2024-11-20 09:20:32.253867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.097 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.097 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.097 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.097 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.097 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.097 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 BaseBdev1_malloc 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 true 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 [2024-11-20 09:20:32.612933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.357 [2024-11-20 09:20:32.613170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.357 [2024-11-20 09:20:32.613214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.357 [2024-11-20 09:20:32.613232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.357 [2024-11-20 09:20:32.616469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.357 [2024-11-20 09:20:32.616532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.357 BaseBdev1 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 BaseBdev2_malloc 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.357 09:20:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 true 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 [2024-11-20 09:20:32.693564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.357 [2024-11-20 09:20:32.693653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.357 [2024-11-20 09:20:32.693679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.357 [2024-11-20 09:20:32.693692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.357 [2024-11-20 09:20:32.696667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.357 [2024-11-20 09:20:32.696727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.357 BaseBdev2 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 [2024-11-20 09:20:32.705755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:07.357 [2024-11-20 09:20:32.708390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.357 [2024-11-20 09:20:32.708675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.357 [2024-11-20 09:20:32.708708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.357 [2024-11-20 09:20:32.709081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:07.357 [2024-11-20 09:20:32.709338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.357 [2024-11-20 09:20:32.709354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.357 [2024-11-20 09:20:32.709766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.357 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.358 09:20:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.358 "name": "raid_bdev1", 00:08:07.358 "uuid": "0e02936f-eb8d-4d30-a6db-b69b60fb1121", 00:08:07.358 "strip_size_kb": 64, 00:08:07.358 "state": "online", 00:08:07.358 "raid_level": "concat", 00:08:07.358 "superblock": true, 00:08:07.358 "num_base_bdevs": 2, 00:08:07.358 "num_base_bdevs_discovered": 2, 00:08:07.358 "num_base_bdevs_operational": 2, 00:08:07.358 "base_bdevs_list": [ 00:08:07.358 { 00:08:07.358 "name": "BaseBdev1", 00:08:07.358 "uuid": "2d184b0d-7acc-5727-b070-7e5f00f8d9cd", 00:08:07.358 "is_configured": true, 00:08:07.358 "data_offset": 2048, 00:08:07.358 "data_size": 63488 00:08:07.358 }, 00:08:07.358 { 00:08:07.358 "name": "BaseBdev2", 00:08:07.358 "uuid": "256880f8-c8b5-584b-a508-2ffa24e4519d", 00:08:07.358 "is_configured": true, 00:08:07.358 "data_offset": 2048, 00:08:07.358 "data_size": 63488 00:08:07.358 } 00:08:07.358 ] 00:08:07.358 }' 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.358 09:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.926 09:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:07.926 09:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.926 [2024-11-20 09:20:33.278482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.863 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.864 "name": "raid_bdev1", 00:08:08.864 "uuid": "0e02936f-eb8d-4d30-a6db-b69b60fb1121", 00:08:08.864 "strip_size_kb": 64, 00:08:08.864 "state": "online", 00:08:08.864 "raid_level": "concat", 00:08:08.864 "superblock": true, 00:08:08.864 "num_base_bdevs": 2, 00:08:08.864 "num_base_bdevs_discovered": 2, 00:08:08.864 "num_base_bdevs_operational": 2, 00:08:08.864 "base_bdevs_list": [ 00:08:08.864 { 00:08:08.864 "name": "BaseBdev1", 00:08:08.864 "uuid": "2d184b0d-7acc-5727-b070-7e5f00f8d9cd", 00:08:08.864 "is_configured": true, 00:08:08.864 "data_offset": 2048, 00:08:08.864 "data_size": 63488 00:08:08.864 }, 00:08:08.864 { 00:08:08.864 "name": "BaseBdev2", 00:08:08.864 "uuid": "256880f8-c8b5-584b-a508-2ffa24e4519d", 00:08:08.864 "is_configured": true, 00:08:08.864 "data_offset": 2048, 00:08:08.864 "data_size": 63488 00:08:08.864 } 00:08:08.864 ] 00:08:08.864 }' 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.864 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.447 09:20:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.447 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.447 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.447 [2024-11-20 09:20:34.624577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.447 [2024-11-20 09:20:34.624707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.447 [2024-11-20 09:20:34.628058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.447 [2024-11-20 09:20:34.628175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.447 [2024-11-20 09:20:34.628228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.447 [2024-11-20 09:20:34.628247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.447 { 00:08:09.447 "results": [ 00:08:09.447 { 00:08:09.447 "job": "raid_bdev1", 00:08:09.447 "core_mask": "0x1", 00:08:09.447 "workload": "randrw", 00:08:09.448 "percentage": 50, 00:08:09.448 "status": "finished", 00:08:09.448 "queue_depth": 1, 00:08:09.448 "io_size": 131072, 00:08:09.448 "runtime": 1.346284, 00:08:09.448 "iops": 12487.706902852593, 00:08:09.448 "mibps": 1560.9633628565741, 00:08:09.448 "io_failed": 1, 00:08:09.448 "io_timeout": 0, 00:08:09.448 "avg_latency_us": 112.56593761793289, 00:08:09.448 "min_latency_us": 27.94759825327511, 00:08:09.448 "max_latency_us": 1688.482096069869 00:08:09.448 } 00:08:09.448 ], 00:08:09.448 "core_count": 1 00:08:09.448 } 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62755 00:08:09.448 09:20:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62755 ']' 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62755 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62755 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.448 killing process with pid 62755 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62755' 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62755 00:08:09.448 [2024-11-20 09:20:34.671560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.448 09:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62755 00:08:09.448 [2024-11-20 09:20:34.833806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.823 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.823 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xphPANDzUM 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.082 09:20:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:11.082 00:08:11.082 real 0m4.787s 00:08:11.082 user 0m5.602s 00:08:11.082 sys 0m0.713s 00:08:11.082 ************************************ 00:08:11.082 END TEST raid_write_error_test 00:08:11.082 ************************************ 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.082 09:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.082 09:20:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:11.082 09:20:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:11.082 09:20:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:11.082 09:20:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.082 09:20:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.082 ************************************ 00:08:11.082 START TEST raid_state_function_test 00:08:11.082 ************************************ 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62904 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62904' 00:08:11.082 Process raid pid: 62904 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62904 00:08:11.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62904 ']' 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.082 09:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.082 [2024-11-20 09:20:36.479829] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:08:11.082 [2024-11-20 09:20:36.479987] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.341 [2024-11-20 09:20:36.653206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.599 [2024-11-20 09:20:36.808497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.858 [2024-11-20 09:20:37.081282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.858 [2024-11-20 09:20:37.081356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.117 [2024-11-20 09:20:37.436930] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.117 [2024-11-20 09:20:37.437088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.117 [2024-11-20 09:20:37.437107] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.117 [2024-11-20 09:20:37.437121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.117 09:20:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.117 "name": "Existed_Raid", 00:08:12.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.117 "strip_size_kb": 0, 00:08:12.117 "state": "configuring", 00:08:12.117 
"raid_level": "raid1", 00:08:12.117 "superblock": false, 00:08:12.117 "num_base_bdevs": 2, 00:08:12.117 "num_base_bdevs_discovered": 0, 00:08:12.117 "num_base_bdevs_operational": 2, 00:08:12.117 "base_bdevs_list": [ 00:08:12.117 { 00:08:12.117 "name": "BaseBdev1", 00:08:12.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.117 "is_configured": false, 00:08:12.117 "data_offset": 0, 00:08:12.117 "data_size": 0 00:08:12.117 }, 00:08:12.117 { 00:08:12.117 "name": "BaseBdev2", 00:08:12.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.117 "is_configured": false, 00:08:12.117 "data_offset": 0, 00:08:12.117 "data_size": 0 00:08:12.117 } 00:08:12.117 ] 00:08:12.117 }' 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.117 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.684 [2024-11-20 09:20:37.920175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.684 [2024-11-20 09:20:37.920332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:12.684 [2024-11-20 09:20:37.932153] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.684 [2024-11-20 09:20:37.932306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.684 [2024-11-20 09:20:37.932346] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.684 [2024-11-20 09:20:37.932389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.684 [2024-11-20 09:20:37.995703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.684 BaseBdev1 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.684 09:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.684 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.684 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:12.684 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.684 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.684 [ 00:08:12.684 { 00:08:12.684 "name": "BaseBdev1", 00:08:12.684 "aliases": [ 00:08:12.684 "795d0548-cbbe-4cfa-be32-dedf3b338738" 00:08:12.684 ], 00:08:12.684 "product_name": "Malloc disk", 00:08:12.684 "block_size": 512, 00:08:12.684 "num_blocks": 65536, 00:08:12.684 "uuid": "795d0548-cbbe-4cfa-be32-dedf3b338738", 00:08:12.684 "assigned_rate_limits": { 00:08:12.684 "rw_ios_per_sec": 0, 00:08:12.684 "rw_mbytes_per_sec": 0, 00:08:12.684 "r_mbytes_per_sec": 0, 00:08:12.684 "w_mbytes_per_sec": 0 00:08:12.684 }, 00:08:12.684 "claimed": true, 00:08:12.684 "claim_type": "exclusive_write", 00:08:12.684 "zoned": false, 00:08:12.684 "supported_io_types": { 00:08:12.684 "read": true, 00:08:12.684 "write": true, 00:08:12.684 "unmap": true, 00:08:12.684 "flush": true, 00:08:12.684 "reset": true, 00:08:12.684 "nvme_admin": false, 00:08:12.684 "nvme_io": false, 00:08:12.684 "nvme_io_md": false, 00:08:12.684 "write_zeroes": true, 00:08:12.684 "zcopy": true, 00:08:12.684 "get_zone_info": false, 00:08:12.684 "zone_management": false, 00:08:12.684 "zone_append": false, 00:08:12.684 "compare": false, 00:08:12.684 "compare_and_write": false, 00:08:12.684 "abort": true, 00:08:12.684 "seek_hole": false, 00:08:12.684 "seek_data": false, 00:08:12.684 "copy": true, 00:08:12.684 "nvme_iov_md": 
false 00:08:12.684 }, 00:08:12.684 "memory_domains": [ 00:08:12.684 { 00:08:12.684 "dma_device_id": "system", 00:08:12.684 "dma_device_type": 1 00:08:12.684 }, 00:08:12.684 { 00:08:12.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.684 "dma_device_type": 2 00:08:12.684 } 00:08:12.684 ], 00:08:12.684 "driver_specific": {} 00:08:12.684 } 00:08:12.684 ] 00:08:12.684 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.684 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.685 
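The `verify_raid_bdev_state` helper seen at `bdev_raid.sh@113` pulls the `Existed_Raid` entry out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares fields such as `state` against the expected value. A dependency-free sketch of that field extraction, using a bash regex in place of jq; the JSON fragment mirrors the `Existed_Raid` dump in the trace:

```shell
# Simplified stand-in for the jq filter at bdev_raid.sh@113; a real run parses
# the full bdev_raid_get_bdevs output. JSON fragment copied from the dump above.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "raid1", "num_base_bdevs_discovered": 1 }'
re='"state": "([a-z]+)"'
[[ $raid_bdev_info =~ $re ]] && state="${BASH_REMATCH[1]}"
echo "$state"   # configuring
```

With only `BaseBdev1` claimed so far, the expected state checked here is still `configuring`; it flips to `online` later in the trace once `BaseBdev2` is created and claimed.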
09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.685 "name": "Existed_Raid", 00:08:12.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.685 "strip_size_kb": 0, 00:08:12.685 "state": "configuring", 00:08:12.685 "raid_level": "raid1", 00:08:12.685 "superblock": false, 00:08:12.685 "num_base_bdevs": 2, 00:08:12.685 "num_base_bdevs_discovered": 1, 00:08:12.685 "num_base_bdevs_operational": 2, 00:08:12.685 "base_bdevs_list": [ 00:08:12.685 { 00:08:12.685 "name": "BaseBdev1", 00:08:12.685 "uuid": "795d0548-cbbe-4cfa-be32-dedf3b338738", 00:08:12.685 "is_configured": true, 00:08:12.685 "data_offset": 0, 00:08:12.685 "data_size": 65536 00:08:12.685 }, 00:08:12.685 { 00:08:12.685 "name": "BaseBdev2", 00:08:12.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.685 "is_configured": false, 00:08:12.685 "data_offset": 0, 00:08:12.685 "data_size": 0 00:08:12.685 } 00:08:12.685 ] 00:08:12.685 }' 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.685 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.253 [2024-11-20 09:20:38.526873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.253 [2024-11-20 09:20:38.527060] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.253 [2024-11-20 09:20:38.538968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.253 [2024-11-20 09:20:38.541711] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.253 [2024-11-20 09:20:38.541864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.253 "name": "Existed_Raid", 00:08:13.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.253 "strip_size_kb": 0, 00:08:13.253 "state": "configuring", 00:08:13.253 "raid_level": "raid1", 00:08:13.253 "superblock": false, 00:08:13.253 "num_base_bdevs": 2, 00:08:13.253 "num_base_bdevs_discovered": 1, 00:08:13.253 "num_base_bdevs_operational": 2, 00:08:13.253 "base_bdevs_list": [ 00:08:13.253 { 00:08:13.253 "name": "BaseBdev1", 00:08:13.253 "uuid": "795d0548-cbbe-4cfa-be32-dedf3b338738", 00:08:13.253 "is_configured": true, 00:08:13.253 "data_offset": 0, 00:08:13.253 "data_size": 65536 00:08:13.253 }, 00:08:13.253 { 00:08:13.253 "name": "BaseBdev2", 00:08:13.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.253 "is_configured": false, 00:08:13.253 "data_offset": 0, 00:08:13.253 "data_size": 0 00:08:13.253 } 00:08:13.253 ] 
00:08:13.253 }' 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.253 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.822 09:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.822 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.822 09:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.822 [2024-11-20 09:20:39.041136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.822 [2024-11-20 09:20:39.041222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.822 [2024-11-20 09:20:39.041232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:13.822 [2024-11-20 09:20:39.041614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.822 [2024-11-20 09:20:39.041832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.822 [2024-11-20 09:20:39.041851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:13.822 [2024-11-20 09:20:39.042275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.822 BaseBdev2 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.822 [ 00:08:13.822 { 00:08:13.822 "name": "BaseBdev2", 00:08:13.822 "aliases": [ 00:08:13.822 "5c27105a-dc78-4375-acc3-349fcdc12be8" 00:08:13.822 ], 00:08:13.822 "product_name": "Malloc disk", 00:08:13.822 "block_size": 512, 00:08:13.822 "num_blocks": 65536, 00:08:13.822 "uuid": "5c27105a-dc78-4375-acc3-349fcdc12be8", 00:08:13.822 "assigned_rate_limits": { 00:08:13.822 "rw_ios_per_sec": 0, 00:08:13.822 "rw_mbytes_per_sec": 0, 00:08:13.822 "r_mbytes_per_sec": 0, 00:08:13.822 "w_mbytes_per_sec": 0 00:08:13.822 }, 00:08:13.822 "claimed": true, 00:08:13.822 "claim_type": "exclusive_write", 00:08:13.822 "zoned": false, 00:08:13.822 "supported_io_types": { 00:08:13.822 "read": true, 00:08:13.822 "write": true, 00:08:13.822 "unmap": true, 00:08:13.822 "flush": true, 00:08:13.822 "reset": true, 00:08:13.822 "nvme_admin": false, 00:08:13.822 "nvme_io": false, 00:08:13.822 "nvme_io_md": false, 00:08:13.822 "write_zeroes": 
true, 00:08:13.822 "zcopy": true, 00:08:13.822 "get_zone_info": false, 00:08:13.822 "zone_management": false, 00:08:13.822 "zone_append": false, 00:08:13.822 "compare": false, 00:08:13.822 "compare_and_write": false, 00:08:13.822 "abort": true, 00:08:13.822 "seek_hole": false, 00:08:13.822 "seek_data": false, 00:08:13.822 "copy": true, 00:08:13.822 "nvme_iov_md": false 00:08:13.822 }, 00:08:13.822 "memory_domains": [ 00:08:13.822 { 00:08:13.822 "dma_device_id": "system", 00:08:13.822 "dma_device_type": 1 00:08:13.822 }, 00:08:13.822 { 00:08:13.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.822 "dma_device_type": 2 00:08:13.822 } 00:08:13.822 ], 00:08:13.822 "driver_specific": {} 00:08:13.822 } 00:08:13.822 ] 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.822 09:20:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.822 "name": "Existed_Raid", 00:08:13.822 "uuid": "1b8c9a54-5c1d-4bcc-a57c-36bb47c5cb01", 00:08:13.822 "strip_size_kb": 0, 00:08:13.822 "state": "online", 00:08:13.822 "raid_level": "raid1", 00:08:13.822 "superblock": false, 00:08:13.822 "num_base_bdevs": 2, 00:08:13.822 "num_base_bdevs_discovered": 2, 00:08:13.822 "num_base_bdevs_operational": 2, 00:08:13.822 "base_bdevs_list": [ 00:08:13.822 { 00:08:13.822 "name": "BaseBdev1", 00:08:13.822 "uuid": "795d0548-cbbe-4cfa-be32-dedf3b338738", 00:08:13.822 "is_configured": true, 00:08:13.822 "data_offset": 0, 00:08:13.822 "data_size": 65536 00:08:13.822 }, 00:08:13.822 { 00:08:13.822 "name": "BaseBdev2", 00:08:13.822 "uuid": "5c27105a-dc78-4375-acc3-349fcdc12be8", 00:08:13.822 "is_configured": true, 00:08:13.822 "data_offset": 0, 00:08:13.822 "data_size": 65536 00:08:13.822 } 00:08:13.822 ] 00:08:13.822 }' 00:08:13.822 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.822 09:20:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.388 [2024-11-20 09:20:39.584778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.388 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.388 "name": "Existed_Raid", 00:08:14.388 "aliases": [ 00:08:14.388 "1b8c9a54-5c1d-4bcc-a57c-36bb47c5cb01" 00:08:14.388 ], 00:08:14.388 "product_name": "Raid Volume", 00:08:14.388 "block_size": 512, 00:08:14.388 "num_blocks": 65536, 00:08:14.388 "uuid": "1b8c9a54-5c1d-4bcc-a57c-36bb47c5cb01", 00:08:14.388 "assigned_rate_limits": { 00:08:14.388 "rw_ios_per_sec": 0, 00:08:14.388 "rw_mbytes_per_sec": 0, 00:08:14.388 "r_mbytes_per_sec": 0, 00:08:14.388 
"w_mbytes_per_sec": 0 00:08:14.388 }, 00:08:14.388 "claimed": false, 00:08:14.388 "zoned": false, 00:08:14.388 "supported_io_types": { 00:08:14.388 "read": true, 00:08:14.388 "write": true, 00:08:14.388 "unmap": false, 00:08:14.388 "flush": false, 00:08:14.388 "reset": true, 00:08:14.388 "nvme_admin": false, 00:08:14.388 "nvme_io": false, 00:08:14.389 "nvme_io_md": false, 00:08:14.389 "write_zeroes": true, 00:08:14.389 "zcopy": false, 00:08:14.389 "get_zone_info": false, 00:08:14.389 "zone_management": false, 00:08:14.389 "zone_append": false, 00:08:14.389 "compare": false, 00:08:14.389 "compare_and_write": false, 00:08:14.389 "abort": false, 00:08:14.389 "seek_hole": false, 00:08:14.389 "seek_data": false, 00:08:14.389 "copy": false, 00:08:14.389 "nvme_iov_md": false 00:08:14.389 }, 00:08:14.389 "memory_domains": [ 00:08:14.389 { 00:08:14.389 "dma_device_id": "system", 00:08:14.389 "dma_device_type": 1 00:08:14.389 }, 00:08:14.389 { 00:08:14.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.389 "dma_device_type": 2 00:08:14.389 }, 00:08:14.389 { 00:08:14.389 "dma_device_id": "system", 00:08:14.389 "dma_device_type": 1 00:08:14.389 }, 00:08:14.389 { 00:08:14.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.389 "dma_device_type": 2 00:08:14.389 } 00:08:14.389 ], 00:08:14.389 "driver_specific": { 00:08:14.389 "raid": { 00:08:14.389 "uuid": "1b8c9a54-5c1d-4bcc-a57c-36bb47c5cb01", 00:08:14.389 "strip_size_kb": 0, 00:08:14.389 "state": "online", 00:08:14.389 "raid_level": "raid1", 00:08:14.389 "superblock": false, 00:08:14.389 "num_base_bdevs": 2, 00:08:14.389 "num_base_bdevs_discovered": 2, 00:08:14.389 "num_base_bdevs_operational": 2, 00:08:14.389 "base_bdevs_list": [ 00:08:14.389 { 00:08:14.389 "name": "BaseBdev1", 00:08:14.389 "uuid": "795d0548-cbbe-4cfa-be32-dedf3b338738", 00:08:14.389 "is_configured": true, 00:08:14.389 "data_offset": 0, 00:08:14.389 "data_size": 65536 00:08:14.389 }, 00:08:14.389 { 00:08:14.389 "name": "BaseBdev2", 00:08:14.389 "uuid": 
"5c27105a-dc78-4375-acc3-349fcdc12be8", 00:08:14.389 "is_configured": true, 00:08:14.389 "data_offset": 0, 00:08:14.389 "data_size": 65536 00:08:14.389 } 00:08:14.389 ] 00:08:14.389 } 00:08:14.389 } 00:08:14.389 }' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:14.389 BaseBdev2' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.389 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.389 [2024-11-20 09:20:39.804188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:14.647 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.648 "name": "Existed_Raid", 00:08:14.648 "uuid": "1b8c9a54-5c1d-4bcc-a57c-36bb47c5cb01", 00:08:14.648 "strip_size_kb": 0, 00:08:14.648 "state": "online", 00:08:14.648 "raid_level": "raid1", 00:08:14.648 "superblock": false, 00:08:14.648 "num_base_bdevs": 2, 00:08:14.648 "num_base_bdevs_discovered": 1, 00:08:14.648 "num_base_bdevs_operational": 1, 00:08:14.648 "base_bdevs_list": [ 00:08:14.648 { 
00:08:14.648 "name": null, 00:08:14.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.648 "is_configured": false, 00:08:14.648 "data_offset": 0, 00:08:14.648 "data_size": 65536 00:08:14.648 }, 00:08:14.648 { 00:08:14.648 "name": "BaseBdev2", 00:08:14.648 "uuid": "5c27105a-dc78-4375-acc3-349fcdc12be8", 00:08:14.648 "is_configured": true, 00:08:14.648 "data_offset": 0, 00:08:14.648 "data_size": 65536 00:08:14.648 } 00:08:14.648 ] 00:08:14.648 }' 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.648 09:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:15.215 [2024-11-20 09:20:40.473517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:15.215 [2024-11-20 09:20:40.473757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.215 [2024-11-20 09:20:40.594819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.215 [2024-11-20 09:20:40.595007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.215 [2024-11-20 09:20:40.595056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62904 00:08:15.215 09:20:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62904 ']' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62904 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.215 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62904 00:08:15.474 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.474 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.474 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62904' 00:08:15.474 killing process with pid 62904 00:08:15.474 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62904 00:08:15.474 [2024-11-20 09:20:40.681866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.474 09:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62904 00:08:15.474 [2024-11-20 09:20:40.703690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:16.857 00:08:16.857 real 0m5.730s 00:08:16.857 user 0m8.057s 00:08:16.857 sys 0m1.057s 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.857 ************************************ 00:08:16.857 END TEST raid_state_function_test 00:08:16.857 ************************************ 00:08:16.857 09:20:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:16.857 09:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.857 09:20:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.857 09:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.857 ************************************ 00:08:16.857 START TEST raid_state_function_test_sb 00:08:16.857 ************************************ 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:16.857 Process raid pid: 63157 00:08:16.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63157 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63157' 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63157 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63157 ']' 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.857 09:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.857 [2024-11-20 09:20:42.280846] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:08:16.857 [2024-11-20 09:20:42.281122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.117 [2024-11-20 09:20:42.465012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.376 [2024-11-20 09:20:42.624831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.635 [2024-11-20 09:20:42.902428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.635 [2024-11-20 09:20:42.902626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.895 [2024-11-20 09:20:43.227281] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.895 [2024-11-20 09:20:43.227478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.895 [2024-11-20 09:20:43.227527] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.895 [2024-11-20 09:20:43.227559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.895 
09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.895 "name": "Existed_Raid", 00:08:17.895 "uuid": "4386f75f-7a8b-47db-b316-4693b02f3e10", 00:08:17.895 "strip_size_kb": 0, 
00:08:17.895 "state": "configuring", 00:08:17.895 "raid_level": "raid1", 00:08:17.895 "superblock": true, 00:08:17.895 "num_base_bdevs": 2, 00:08:17.895 "num_base_bdevs_discovered": 0, 00:08:17.895 "num_base_bdevs_operational": 2, 00:08:17.895 "base_bdevs_list": [ 00:08:17.895 { 00:08:17.895 "name": "BaseBdev1", 00:08:17.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.895 "is_configured": false, 00:08:17.895 "data_offset": 0, 00:08:17.895 "data_size": 0 00:08:17.895 }, 00:08:17.895 { 00:08:17.895 "name": "BaseBdev2", 00:08:17.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.895 "is_configured": false, 00:08:17.895 "data_offset": 0, 00:08:17.895 "data_size": 0 00:08:17.895 } 00:08:17.895 ] 00:08:17.895 }' 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.895 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 [2024-11-20 09:20:43.738507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.463 [2024-11-20 09:20:43.738570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.463 09:20:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 [2024-11-20 09:20:43.750509] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.463 [2024-11-20 09:20:43.750585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.463 [2024-11-20 09:20:43.750598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.463 [2024-11-20 09:20:43.750615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 [2024-11-20 09:20:43.814065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.463 BaseBdev1 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.463 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 [ 00:08:18.463 { 00:08:18.463 "name": "BaseBdev1", 00:08:18.463 "aliases": [ 00:08:18.463 "c559dce7-76ac-4777-b665-a12b33ac8a7c" 00:08:18.463 ], 00:08:18.463 "product_name": "Malloc disk", 00:08:18.463 "block_size": 512, 00:08:18.463 "num_blocks": 65536, 00:08:18.463 "uuid": "c559dce7-76ac-4777-b665-a12b33ac8a7c", 00:08:18.463 "assigned_rate_limits": { 00:08:18.463 "rw_ios_per_sec": 0, 00:08:18.463 "rw_mbytes_per_sec": 0, 00:08:18.463 "r_mbytes_per_sec": 0, 00:08:18.463 "w_mbytes_per_sec": 0 00:08:18.463 }, 00:08:18.463 "claimed": true, 00:08:18.463 "claim_type": "exclusive_write", 00:08:18.463 "zoned": false, 00:08:18.463 "supported_io_types": { 00:08:18.463 "read": true, 00:08:18.463 "write": true, 00:08:18.463 "unmap": true, 00:08:18.463 "flush": true, 00:08:18.463 "reset": true, 00:08:18.463 "nvme_admin": false, 00:08:18.463 "nvme_io": false, 00:08:18.463 "nvme_io_md": false, 00:08:18.463 "write_zeroes": true, 00:08:18.463 "zcopy": true, 00:08:18.463 "get_zone_info": false, 00:08:18.463 "zone_management": false, 00:08:18.463 "zone_append": false, 00:08:18.463 "compare": false, 00:08:18.463 "compare_and_write": false, 00:08:18.463 
"abort": true, 00:08:18.463 "seek_hole": false, 00:08:18.463 "seek_data": false, 00:08:18.463 "copy": true, 00:08:18.463 "nvme_iov_md": false 00:08:18.463 }, 00:08:18.463 "memory_domains": [ 00:08:18.463 { 00:08:18.463 "dma_device_id": "system", 00:08:18.463 "dma_device_type": 1 00:08:18.463 }, 00:08:18.463 { 00:08:18.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.464 "dma_device_type": 2 00:08:18.464 } 00:08:18.464 ], 00:08:18.464 "driver_specific": {} 00:08:18.464 } 00:08:18.464 ] 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.464 "name": "Existed_Raid", 00:08:18.464 "uuid": "f01e8d54-63e1-4530-8f26-a3a0bfa7e999", 00:08:18.464 "strip_size_kb": 0, 00:08:18.464 "state": "configuring", 00:08:18.464 "raid_level": "raid1", 00:08:18.464 "superblock": true, 00:08:18.464 "num_base_bdevs": 2, 00:08:18.464 "num_base_bdevs_discovered": 1, 00:08:18.464 "num_base_bdevs_operational": 2, 00:08:18.464 "base_bdevs_list": [ 00:08:18.464 { 00:08:18.464 "name": "BaseBdev1", 00:08:18.464 "uuid": "c559dce7-76ac-4777-b665-a12b33ac8a7c", 00:08:18.464 "is_configured": true, 00:08:18.464 "data_offset": 2048, 00:08:18.464 "data_size": 63488 00:08:18.464 }, 00:08:18.464 { 00:08:18.464 "name": "BaseBdev2", 00:08:18.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.464 "is_configured": false, 00:08:18.464 "data_offset": 0, 00:08:18.464 "data_size": 0 00:08:18.464 } 00:08:18.464 ] 00:08:18.464 }' 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.464 09:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.033 [2024-11-20 09:20:44.313314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.033 [2024-11-20 09:20:44.313401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.033 [2024-11-20 09:20:44.321400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.033 [2024-11-20 09:20:44.323772] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.033 [2024-11-20 09:20:44.323921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.033 "name": "Existed_Raid", 00:08:19.033 "uuid": "5a81c2a9-d7b3-4839-ae0c-84ffaad01cfb", 00:08:19.033 "strip_size_kb": 0, 00:08:19.033 "state": "configuring", 00:08:19.033 "raid_level": "raid1", 00:08:19.033 "superblock": true, 00:08:19.033 "num_base_bdevs": 2, 00:08:19.033 "num_base_bdevs_discovered": 1, 00:08:19.033 "num_base_bdevs_operational": 2, 00:08:19.033 "base_bdevs_list": [ 00:08:19.033 { 00:08:19.033 "name": "BaseBdev1", 00:08:19.033 "uuid": "c559dce7-76ac-4777-b665-a12b33ac8a7c", 00:08:19.033 "is_configured": true, 00:08:19.033 "data_offset": 2048, 
00:08:19.033 "data_size": 63488 00:08:19.033 }, 00:08:19.033 { 00:08:19.033 "name": "BaseBdev2", 00:08:19.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.033 "is_configured": false, 00:08:19.033 "data_offset": 0, 00:08:19.033 "data_size": 0 00:08:19.033 } 00:08:19.033 ] 00:08:19.033 }' 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.033 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.293 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.293 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.293 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.552 [2024-11-20 09:20:44.792873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.552 [2024-11-20 09:20:44.793351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.552 [2024-11-20 09:20:44.793424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:19.552 [2024-11-20 09:20:44.793850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:19.552 BaseBdev2 00:08:19.552 [2024-11-20 09:20:44.794103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.552 [2024-11-20 09:20:44.794167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:19.552 [2024-11-20 09:20:44.794390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.552 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.553 [ 00:08:19.553 { 00:08:19.553 "name": "BaseBdev2", 00:08:19.553 "aliases": [ 00:08:19.553 "13b59e02-28f9-4ded-b1f1-07f905122e29" 00:08:19.553 ], 00:08:19.553 "product_name": "Malloc disk", 00:08:19.553 "block_size": 512, 00:08:19.553 "num_blocks": 65536, 00:08:19.553 "uuid": "13b59e02-28f9-4ded-b1f1-07f905122e29", 00:08:19.553 "assigned_rate_limits": { 00:08:19.553 "rw_ios_per_sec": 0, 00:08:19.553 "rw_mbytes_per_sec": 0, 00:08:19.553 "r_mbytes_per_sec": 0, 00:08:19.553 "w_mbytes_per_sec": 0 00:08:19.553 }, 00:08:19.553 "claimed": true, 00:08:19.553 "claim_type": 
"exclusive_write", 00:08:19.553 "zoned": false, 00:08:19.553 "supported_io_types": { 00:08:19.553 "read": true, 00:08:19.553 "write": true, 00:08:19.553 "unmap": true, 00:08:19.553 "flush": true, 00:08:19.553 "reset": true, 00:08:19.553 "nvme_admin": false, 00:08:19.553 "nvme_io": false, 00:08:19.553 "nvme_io_md": false, 00:08:19.553 "write_zeroes": true, 00:08:19.553 "zcopy": true, 00:08:19.553 "get_zone_info": false, 00:08:19.553 "zone_management": false, 00:08:19.553 "zone_append": false, 00:08:19.553 "compare": false, 00:08:19.553 "compare_and_write": false, 00:08:19.553 "abort": true, 00:08:19.553 "seek_hole": false, 00:08:19.553 "seek_data": false, 00:08:19.553 "copy": true, 00:08:19.553 "nvme_iov_md": false 00:08:19.553 }, 00:08:19.553 "memory_domains": [ 00:08:19.553 { 00:08:19.553 "dma_device_id": "system", 00:08:19.553 "dma_device_type": 1 00:08:19.553 }, 00:08:19.553 { 00:08:19.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.553 "dma_device_type": 2 00:08:19.553 } 00:08:19.553 ], 00:08:19.553 "driver_specific": {} 00:08:19.553 } 00:08:19.553 ] 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.553 "name": "Existed_Raid", 00:08:19.553 "uuid": "5a81c2a9-d7b3-4839-ae0c-84ffaad01cfb", 00:08:19.553 "strip_size_kb": 0, 00:08:19.553 "state": "online", 00:08:19.553 "raid_level": "raid1", 00:08:19.553 "superblock": true, 00:08:19.553 "num_base_bdevs": 2, 00:08:19.553 "num_base_bdevs_discovered": 2, 00:08:19.553 "num_base_bdevs_operational": 2, 00:08:19.553 "base_bdevs_list": [ 00:08:19.553 { 00:08:19.553 "name": "BaseBdev1", 00:08:19.553 "uuid": "c559dce7-76ac-4777-b665-a12b33ac8a7c", 00:08:19.553 "is_configured": true, 00:08:19.553 "data_offset": 2048, 00:08:19.553 "data_size": 63488 
00:08:19.553 }, 00:08:19.553 { 00:08:19.553 "name": "BaseBdev2", 00:08:19.553 "uuid": "13b59e02-28f9-4ded-b1f1-07f905122e29", 00:08:19.553 "is_configured": true, 00:08:19.553 "data_offset": 2048, 00:08:19.553 "data_size": 63488 00:08:19.553 } 00:08:19.553 ] 00:08:19.553 }' 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.553 09:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.204 [2024-11-20 09:20:45.296538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.204 "name": 
"Existed_Raid", 00:08:20.204 "aliases": [ 00:08:20.204 "5a81c2a9-d7b3-4839-ae0c-84ffaad01cfb" 00:08:20.204 ], 00:08:20.204 "product_name": "Raid Volume", 00:08:20.204 "block_size": 512, 00:08:20.204 "num_blocks": 63488, 00:08:20.204 "uuid": "5a81c2a9-d7b3-4839-ae0c-84ffaad01cfb", 00:08:20.204 "assigned_rate_limits": { 00:08:20.204 "rw_ios_per_sec": 0, 00:08:20.204 "rw_mbytes_per_sec": 0, 00:08:20.204 "r_mbytes_per_sec": 0, 00:08:20.204 "w_mbytes_per_sec": 0 00:08:20.204 }, 00:08:20.204 "claimed": false, 00:08:20.204 "zoned": false, 00:08:20.204 "supported_io_types": { 00:08:20.204 "read": true, 00:08:20.204 "write": true, 00:08:20.204 "unmap": false, 00:08:20.204 "flush": false, 00:08:20.204 "reset": true, 00:08:20.204 "nvme_admin": false, 00:08:20.204 "nvme_io": false, 00:08:20.204 "nvme_io_md": false, 00:08:20.204 "write_zeroes": true, 00:08:20.204 "zcopy": false, 00:08:20.204 "get_zone_info": false, 00:08:20.204 "zone_management": false, 00:08:20.204 "zone_append": false, 00:08:20.204 "compare": false, 00:08:20.204 "compare_and_write": false, 00:08:20.204 "abort": false, 00:08:20.204 "seek_hole": false, 00:08:20.204 "seek_data": false, 00:08:20.204 "copy": false, 00:08:20.204 "nvme_iov_md": false 00:08:20.204 }, 00:08:20.204 "memory_domains": [ 00:08:20.204 { 00:08:20.204 "dma_device_id": "system", 00:08:20.204 "dma_device_type": 1 00:08:20.204 }, 00:08:20.204 { 00:08:20.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.204 "dma_device_type": 2 00:08:20.204 }, 00:08:20.204 { 00:08:20.204 "dma_device_id": "system", 00:08:20.204 "dma_device_type": 1 00:08:20.204 }, 00:08:20.204 { 00:08:20.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.204 "dma_device_type": 2 00:08:20.204 } 00:08:20.204 ], 00:08:20.204 "driver_specific": { 00:08:20.204 "raid": { 00:08:20.204 "uuid": "5a81c2a9-d7b3-4839-ae0c-84ffaad01cfb", 00:08:20.204 "strip_size_kb": 0, 00:08:20.204 "state": "online", 00:08:20.204 "raid_level": "raid1", 00:08:20.204 "superblock": true, 00:08:20.204 
"num_base_bdevs": 2, 00:08:20.204 "num_base_bdevs_discovered": 2, 00:08:20.204 "num_base_bdevs_operational": 2, 00:08:20.204 "base_bdevs_list": [ 00:08:20.204 { 00:08:20.204 "name": "BaseBdev1", 00:08:20.204 "uuid": "c559dce7-76ac-4777-b665-a12b33ac8a7c", 00:08:20.204 "is_configured": true, 00:08:20.204 "data_offset": 2048, 00:08:20.204 "data_size": 63488 00:08:20.204 }, 00:08:20.204 { 00:08:20.204 "name": "BaseBdev2", 00:08:20.204 "uuid": "13b59e02-28f9-4ded-b1f1-07f905122e29", 00:08:20.204 "is_configured": true, 00:08:20.204 "data_offset": 2048, 00:08:20.204 "data_size": 63488 00:08:20.204 } 00:08:20.204 ] 00:08:20.204 } 00:08:20.204 } 00:08:20.204 }' 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.204 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:20.204 BaseBdev2' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.205 [2024-11-20 09:20:45.515935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:20.205 09:20:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.205 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.464 09:20:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.464 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.464 "name": "Existed_Raid", 00:08:20.464 "uuid": "5a81c2a9-d7b3-4839-ae0c-84ffaad01cfb", 00:08:20.464 "strip_size_kb": 0, 00:08:20.464 "state": "online", 00:08:20.464 "raid_level": "raid1", 00:08:20.464 "superblock": true, 00:08:20.464 "num_base_bdevs": 2, 00:08:20.464 "num_base_bdevs_discovered": 1, 00:08:20.464 "num_base_bdevs_operational": 1, 00:08:20.464 "base_bdevs_list": [ 00:08:20.464 { 00:08:20.464 "name": null, 00:08:20.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.464 "is_configured": false, 00:08:20.464 "data_offset": 0, 00:08:20.464 "data_size": 63488 00:08:20.464 }, 00:08:20.464 { 00:08:20.464 "name": "BaseBdev2", 00:08:20.464 "uuid": "13b59e02-28f9-4ded-b1f1-07f905122e29", 00:08:20.464 "is_configured": true, 00:08:20.464 "data_offset": 2048, 00:08:20.464 "data_size": 63488 00:08:20.464 } 00:08:20.464 ] 00:08:20.464 }' 00:08:20.464 09:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.464 09:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.724 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:20.724 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.724 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.724 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.724 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.724 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:20.724 09:20:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.984 [2024-11-20 09:20:46.194046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:20.984 [2024-11-20 09:20:46.194275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.984 [2024-11-20 09:20:46.316758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.984 [2024-11-20 09:20:46.316988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.984 [2024-11-20 09:20:46.317017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63157 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63157 ']' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63157 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63157 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63157' 00:08:20.984 killing process with pid 63157 00:08:20.984 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63157 00:08:20.985 [2024-11-20 09:20:46.421211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.985 09:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63157 
00:08:21.244 [2024-11-20 09:20:46.444603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.625 09:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:22.625 00:08:22.625 real 0m5.702s 00:08:22.625 user 0m7.974s 00:08:22.625 sys 0m1.034s 00:08:22.625 09:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.625 ************************************ 00:08:22.625 END TEST raid_state_function_test_sb 00:08:22.625 ************************************ 00:08:22.625 09:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.626 09:20:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:22.626 09:20:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:22.626 09:20:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.626 09:20:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.626 ************************************ 00:08:22.626 START TEST raid_superblock_test 00:08:22.626 ************************************ 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:22.626 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63419 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63419 00:08:22.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63419 ']' 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.627 09:20:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.627 [2024-11-20 09:20:48.052883] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:08:22.627 [2024-11-20 09:20:48.053051] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63419 ] 00:08:22.887 [2024-11-20 09:20:48.224941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.146 [2024-11-20 09:20:48.372921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.405 [2024-11-20 09:20:48.644320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.405 [2024-11-20 09:20:48.644413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.665 09:20:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.665 09:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.665 malloc1 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.665 [2024-11-20 09:20:49.025688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.665 [2024-11-20 09:20:49.025856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.665 [2024-11-20 09:20:49.025918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:23.665 [2024-11-20 09:20:49.025975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.665 [2024-11-20 09:20:49.028912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.665 [2024-11-20 09:20:49.029005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.665 pt1 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.665 09:20:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.665 malloc2 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.665 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.665 [2024-11-20 09:20:49.095632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.665 [2024-11-20 09:20:49.095717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.665 [2024-11-20 09:20:49.095744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:23.665 
[2024-11-20 09:20:49.095755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.665 [2024-11-20 09:20:49.098517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.665 [2024-11-20 09:20:49.098603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.666 pt2 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.666 [2024-11-20 09:20:49.107671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.666 [2024-11-20 09:20:49.110017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.666 [2024-11-20 09:20:49.110277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:23.666 [2024-11-20 09:20:49.110302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.666 [2024-11-20 09:20:49.110626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:23.666 [2024-11-20 09:20:49.110824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:23.666 [2024-11-20 09:20:49.110843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:23.666 [2024-11-20 09:20:49.111051] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.666 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.928 "name": "raid_bdev1", 00:08:23.928 "uuid": 
"ac663dc1-e402-429a-8124-2d230d44baab", 00:08:23.928 "strip_size_kb": 0, 00:08:23.928 "state": "online", 00:08:23.928 "raid_level": "raid1", 00:08:23.928 "superblock": true, 00:08:23.928 "num_base_bdevs": 2, 00:08:23.928 "num_base_bdevs_discovered": 2, 00:08:23.928 "num_base_bdevs_operational": 2, 00:08:23.928 "base_bdevs_list": [ 00:08:23.928 { 00:08:23.928 "name": "pt1", 00:08:23.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.928 "is_configured": true, 00:08:23.928 "data_offset": 2048, 00:08:23.928 "data_size": 63488 00:08:23.928 }, 00:08:23.928 { 00:08:23.928 "name": "pt2", 00:08:23.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.928 "is_configured": true, 00:08:23.928 "data_offset": 2048, 00:08:23.928 "data_size": 63488 00:08:23.928 } 00:08:23.928 ] 00:08:23.928 }' 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.928 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.191 09:20:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.191 [2024-11-20 09:20:49.579251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.191 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.191 "name": "raid_bdev1", 00:08:24.191 "aliases": [ 00:08:24.191 "ac663dc1-e402-429a-8124-2d230d44baab" 00:08:24.191 ], 00:08:24.191 "product_name": "Raid Volume", 00:08:24.191 "block_size": 512, 00:08:24.191 "num_blocks": 63488, 00:08:24.191 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:24.191 "assigned_rate_limits": { 00:08:24.191 "rw_ios_per_sec": 0, 00:08:24.191 "rw_mbytes_per_sec": 0, 00:08:24.191 "r_mbytes_per_sec": 0, 00:08:24.191 "w_mbytes_per_sec": 0 00:08:24.191 }, 00:08:24.191 "claimed": false, 00:08:24.191 "zoned": false, 00:08:24.191 "supported_io_types": { 00:08:24.191 "read": true, 00:08:24.191 "write": true, 00:08:24.191 "unmap": false, 00:08:24.191 "flush": false, 00:08:24.191 "reset": true, 00:08:24.191 "nvme_admin": false, 00:08:24.191 "nvme_io": false, 00:08:24.191 "nvme_io_md": false, 00:08:24.191 "write_zeroes": true, 00:08:24.191 "zcopy": false, 00:08:24.191 "get_zone_info": false, 00:08:24.191 "zone_management": false, 00:08:24.191 "zone_append": false, 00:08:24.191 "compare": false, 00:08:24.191 "compare_and_write": false, 00:08:24.191 "abort": false, 00:08:24.191 "seek_hole": false, 00:08:24.191 "seek_data": false, 00:08:24.191 "copy": false, 00:08:24.191 "nvme_iov_md": false 00:08:24.191 }, 00:08:24.191 "memory_domains": [ 00:08:24.191 { 00:08:24.191 "dma_device_id": "system", 00:08:24.191 "dma_device_type": 1 00:08:24.191 }, 00:08:24.191 { 00:08:24.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.191 "dma_device_type": 2 00:08:24.191 }, 00:08:24.191 { 00:08:24.191 "dma_device_id": "system", 00:08:24.191 "dma_device_type": 
1 00:08:24.191 }, 00:08:24.191 { 00:08:24.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.191 "dma_device_type": 2 00:08:24.191 } 00:08:24.191 ], 00:08:24.191 "driver_specific": { 00:08:24.191 "raid": { 00:08:24.191 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:24.191 "strip_size_kb": 0, 00:08:24.191 "state": "online", 00:08:24.191 "raid_level": "raid1", 00:08:24.191 "superblock": true, 00:08:24.191 "num_base_bdevs": 2, 00:08:24.191 "num_base_bdevs_discovered": 2, 00:08:24.191 "num_base_bdevs_operational": 2, 00:08:24.191 "base_bdevs_list": [ 00:08:24.191 { 00:08:24.191 "name": "pt1", 00:08:24.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.191 "is_configured": true, 00:08:24.191 "data_offset": 2048, 00:08:24.191 "data_size": 63488 00:08:24.191 }, 00:08:24.191 { 00:08:24.191 "name": "pt2", 00:08:24.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.191 "is_configured": true, 00:08:24.191 "data_offset": 2048, 00:08:24.191 "data_size": 63488 00:08:24.191 } 00:08:24.191 ] 00:08:24.191 } 00:08:24.191 } 00:08:24.191 }' 00:08:24.192 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:24.496 pt2' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 [2024-11-20 09:20:49.834890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.496 09:20:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ac663dc1-e402-429a-8124-2d230d44baab 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ac663dc1-e402-429a-8124-2d230d44baab ']' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 [2024-11-20 09:20:49.882445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.496 [2024-11-20 09:20:49.882567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.496 [2024-11-20 09:20:49.882740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.496 [2024-11-20 09:20:49.882863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.496 [2024-11-20 09:20:49.882926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 09:20:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:24.496 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:24.497 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.497 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.757 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.757 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:24.757 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.757 09:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:24.757 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.757 09:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.757 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.757 [2024-11-20 09:20:50.022251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:24.757 [2024-11-20 09:20:50.025030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:24.757 [2024-11-20 09:20:50.025191] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:24.758 [2024-11-20 09:20:50.025317] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:24.758 [2024-11-20 09:20:50.025399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.758 [2024-11-20 09:20:50.025418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 
name raid_bdev1, state configuring 00:08:24.758 request: 00:08:24.758 { 00:08:24.758 "name": "raid_bdev1", 00:08:24.758 "raid_level": "raid1", 00:08:24.758 "base_bdevs": [ 00:08:24.758 "malloc1", 00:08:24.758 "malloc2" 00:08:24.758 ], 00:08:24.758 "superblock": false, 00:08:24.758 "method": "bdev_raid_create", 00:08:24.758 "req_id": 1 00:08:24.758 } 00:08:24.758 Got JSON-RPC error response 00:08:24.758 response: 00:08:24.758 { 00:08:24.758 "code": -17, 00:08:24.758 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:24.758 } 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.758 [2024-11-20 09:20:50.086137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.758 [2024-11-20 09:20:50.086235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.758 [2024-11-20 09:20:50.086264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:24.758 [2024-11-20 09:20:50.086279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.758 [2024-11-20 09:20:50.089338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.758 [2024-11-20 09:20:50.089398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.758 [2024-11-20 09:20:50.089546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.758 [2024-11-20 09:20:50.089634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.758 pt1 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.758 "name": "raid_bdev1", 00:08:24.758 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:24.758 "strip_size_kb": 0, 00:08:24.758 "state": "configuring", 00:08:24.758 "raid_level": "raid1", 00:08:24.758 "superblock": true, 00:08:24.758 "num_base_bdevs": 2, 00:08:24.758 "num_base_bdevs_discovered": 1, 00:08:24.758 "num_base_bdevs_operational": 2, 00:08:24.758 "base_bdevs_list": [ 00:08:24.758 { 00:08:24.758 "name": "pt1", 00:08:24.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.758 "is_configured": true, 00:08:24.758 "data_offset": 2048, 00:08:24.758 "data_size": 63488 00:08:24.758 }, 00:08:24.758 { 00:08:24.758 "name": null, 00:08:24.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.758 "is_configured": false, 00:08:24.758 "data_offset": 2048, 00:08:24.758 "data_size": 63488 00:08:24.758 } 00:08:24.758 ] 00:08:24.758 }' 00:08:24.758 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.758 09:20:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.328 [2024-11-20 09:20:50.613326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.328 [2024-11-20 09:20:50.613517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.328 [2024-11-20 09:20:50.613555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:25.328 [2024-11-20 09:20:50.613570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.328 [2024-11-20 09:20:50.614197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.328 [2024-11-20 09:20:50.614222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.328 [2024-11-20 09:20:50.614332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:25.328 [2024-11-20 09:20:50.614363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.328 [2024-11-20 09:20:50.614538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.328 [2024-11-20 09:20:50.614555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:25.328 [2024-11-20 
09:20:50.614868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:25.328 [2024-11-20 09:20:50.615064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.328 [2024-11-20 09:20:50.615075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.328 [2024-11-20 09:20:50.615261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.328 pt2 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.328 "name": "raid_bdev1", 00:08:25.328 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:25.328 "strip_size_kb": 0, 00:08:25.328 "state": "online", 00:08:25.328 "raid_level": "raid1", 00:08:25.328 "superblock": true, 00:08:25.328 "num_base_bdevs": 2, 00:08:25.328 "num_base_bdevs_discovered": 2, 00:08:25.328 "num_base_bdevs_operational": 2, 00:08:25.328 "base_bdevs_list": [ 00:08:25.328 { 00:08:25.328 "name": "pt1", 00:08:25.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.328 "is_configured": true, 00:08:25.328 "data_offset": 2048, 00:08:25.328 "data_size": 63488 00:08:25.328 }, 00:08:25.328 { 00:08:25.328 "name": "pt2", 00:08:25.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.328 "is_configured": true, 00:08:25.328 "data_offset": 2048, 00:08:25.328 "data_size": 63488 00:08:25.328 } 00:08:25.328 ] 00:08:25.328 }' 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.328 09:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.587 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.587 [2024-11-20 09:20:51.032992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.846 "name": "raid_bdev1", 00:08:25.846 "aliases": [ 00:08:25.846 "ac663dc1-e402-429a-8124-2d230d44baab" 00:08:25.846 ], 00:08:25.846 "product_name": "Raid Volume", 00:08:25.846 "block_size": 512, 00:08:25.846 "num_blocks": 63488, 00:08:25.846 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:25.846 "assigned_rate_limits": { 00:08:25.846 "rw_ios_per_sec": 0, 00:08:25.846 "rw_mbytes_per_sec": 0, 00:08:25.846 "r_mbytes_per_sec": 0, 00:08:25.846 "w_mbytes_per_sec": 0 00:08:25.846 }, 00:08:25.846 "claimed": false, 00:08:25.846 "zoned": false, 00:08:25.846 "supported_io_types": { 00:08:25.846 "read": true, 00:08:25.846 "write": true, 00:08:25.846 "unmap": false, 00:08:25.846 "flush": false, 00:08:25.846 "reset": true, 00:08:25.846 "nvme_admin": false, 00:08:25.846 "nvme_io": false, 00:08:25.846 "nvme_io_md": false, 00:08:25.846 "write_zeroes": true, 00:08:25.846 "zcopy": false, 00:08:25.846 "get_zone_info": false, 
00:08:25.846 "zone_management": false, 00:08:25.846 "zone_append": false, 00:08:25.846 "compare": false, 00:08:25.846 "compare_and_write": false, 00:08:25.846 "abort": false, 00:08:25.846 "seek_hole": false, 00:08:25.846 "seek_data": false, 00:08:25.846 "copy": false, 00:08:25.846 "nvme_iov_md": false 00:08:25.846 }, 00:08:25.846 "memory_domains": [ 00:08:25.846 { 00:08:25.846 "dma_device_id": "system", 00:08:25.846 "dma_device_type": 1 00:08:25.846 }, 00:08:25.846 { 00:08:25.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.846 "dma_device_type": 2 00:08:25.846 }, 00:08:25.846 { 00:08:25.846 "dma_device_id": "system", 00:08:25.846 "dma_device_type": 1 00:08:25.846 }, 00:08:25.846 { 00:08:25.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.846 "dma_device_type": 2 00:08:25.846 } 00:08:25.846 ], 00:08:25.846 "driver_specific": { 00:08:25.846 "raid": { 00:08:25.846 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:25.846 "strip_size_kb": 0, 00:08:25.846 "state": "online", 00:08:25.846 "raid_level": "raid1", 00:08:25.846 "superblock": true, 00:08:25.846 "num_base_bdevs": 2, 00:08:25.846 "num_base_bdevs_discovered": 2, 00:08:25.846 "num_base_bdevs_operational": 2, 00:08:25.846 "base_bdevs_list": [ 00:08:25.846 { 00:08:25.846 "name": "pt1", 00:08:25.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.846 "is_configured": true, 00:08:25.846 "data_offset": 2048, 00:08:25.846 "data_size": 63488 00:08:25.846 }, 00:08:25.846 { 00:08:25.846 "name": "pt2", 00:08:25.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.846 "is_configured": true, 00:08:25.846 "data_offset": 2048, 00:08:25.846 "data_size": 63488 00:08:25.846 } 00:08:25.846 ] 00:08:25.846 } 00:08:25.846 } 00:08:25.846 }' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:08:25.846 pt2' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.847 [2024-11-20 09:20:51.268597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.847 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ac663dc1-e402-429a-8124-2d230d44baab '!=' ac663dc1-e402-429a-8124-2d230d44baab ']' 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.106 [2024-11-20 09:20:51.312271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.106 "name": "raid_bdev1", 00:08:26.106 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:26.106 "strip_size_kb": 0, 00:08:26.106 "state": "online", 00:08:26.106 "raid_level": "raid1", 00:08:26.106 "superblock": true, 00:08:26.106 "num_base_bdevs": 2, 00:08:26.106 "num_base_bdevs_discovered": 1, 00:08:26.106 "num_base_bdevs_operational": 1, 00:08:26.106 "base_bdevs_list": [ 00:08:26.106 { 00:08:26.106 "name": null, 00:08:26.106 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:26.106 "is_configured": false, 00:08:26.106 "data_offset": 0, 00:08:26.106 "data_size": 63488 00:08:26.106 }, 00:08:26.106 { 00:08:26.106 "name": "pt2", 00:08:26.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.106 "is_configured": true, 00:08:26.106 "data_offset": 2048, 00:08:26.106 "data_size": 63488 00:08:26.106 } 00:08:26.106 ] 00:08:26.106 }' 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.106 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.365 [2024-11-20 09:20:51.747601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.365 [2024-11-20 09:20:51.747740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.365 [2024-11-20 09:20:51.747908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.365 [2024-11-20 09:20:51.748019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.365 [2024-11-20 09:20:51.748079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.365 
09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.365 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.624 [2024-11-20 09:20:51.823498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.624 
[2024-11-20 09:20:51.823700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.624 [2024-11-20 09:20:51.823760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:26.624 [2024-11-20 09:20:51.823809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.624 [2024-11-20 09:20:51.826861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.624 [2024-11-20 09:20:51.827011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.624 [2024-11-20 09:20:51.827184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.624 [2024-11-20 09:20:51.827284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.624 [2024-11-20 09:20:51.827478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:26.624 [2024-11-20 09:20:51.827532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.624 [2024-11-20 09:20:51.827878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:26.624 [2024-11-20 09:20:51.828122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.624 [2024-11-20 09:20:51.828173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:26.624 [2024-11-20 09:20:51.828501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.624 pt2 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.624 
09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.624 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.624 "name": "raid_bdev1", 00:08:26.624 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:26.624 "strip_size_kb": 0, 00:08:26.624 "state": "online", 00:08:26.624 "raid_level": "raid1", 00:08:26.624 "superblock": true, 00:08:26.625 "num_base_bdevs": 2, 00:08:26.625 "num_base_bdevs_discovered": 1, 00:08:26.625 "num_base_bdevs_operational": 1, 00:08:26.625 "base_bdevs_list": [ 00:08:26.625 { 00:08:26.625 "name": null, 00:08:26.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.625 
"is_configured": false, 00:08:26.625 "data_offset": 2048, 00:08:26.625 "data_size": 63488 00:08:26.625 }, 00:08:26.625 { 00:08:26.625 "name": "pt2", 00:08:26.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.625 "is_configured": true, 00:08:26.625 "data_offset": 2048, 00:08:26.625 "data_size": 63488 00:08:26.625 } 00:08:26.625 ] 00:08:26.625 }' 00:08:26.625 09:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.625 09:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.884 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:26.884 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.884 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.884 [2024-11-20 09:20:52.330732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.884 [2024-11-20 09:20:52.330770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.884 [2024-11-20 09:20:52.330886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.884 [2024-11-20 09:20:52.330955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.884 [2024-11-20 09:20:52.330968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:26.884 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.143 09:20:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.143 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.143 [2024-11-20 09:20:52.394674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.143 [2024-11-20 09:20:52.394763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.143 [2024-11-20 09:20:52.394791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:27.143 [2024-11-20 09:20:52.394801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.143 [2024-11-20 09:20:52.397835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.143 [2024-11-20 09:20:52.397928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.143 [2024-11-20 09:20:52.398077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:27.144 [2024-11-20 09:20:52.398146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.144 [2024-11-20 09:20:52.398330] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:08:27.144 [2024-11-20 09:20:52.398344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.144 [2024-11-20 09:20:52.398365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:27.144 [2024-11-20 09:20:52.398465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.144 [2024-11-20 09:20:52.398574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:27.144 [2024-11-20 09:20:52.398585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:27.144 [2024-11-20 09:20:52.398907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:27.144 [2024-11-20 09:20:52.399090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:27.144 [2024-11-20 09:20:52.399106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:27.144 [2024-11-20 09:20:52.399344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.144 pt1 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.144 09:20:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.144 "name": "raid_bdev1", 00:08:27.144 "uuid": "ac663dc1-e402-429a-8124-2d230d44baab", 00:08:27.144 "strip_size_kb": 0, 00:08:27.144 "state": "online", 00:08:27.144 "raid_level": "raid1", 00:08:27.144 "superblock": true, 00:08:27.144 "num_base_bdevs": 2, 00:08:27.144 "num_base_bdevs_discovered": 1, 00:08:27.144 "num_base_bdevs_operational": 1, 00:08:27.144 "base_bdevs_list": [ 00:08:27.144 { 00:08:27.144 "name": null, 00:08:27.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.144 "is_configured": false, 00:08:27.144 "data_offset": 2048, 00:08:27.144 "data_size": 63488 00:08:27.144 }, 00:08:27.144 { 00:08:27.144 "name": "pt2", 00:08:27.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.144 "is_configured": true, 00:08:27.144 "data_offset": 2048, 00:08:27.144 "data_size": 63488 00:08:27.144 } 
00:08:27.144 ] 00:08:27.144 }' 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.144 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.403 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:27.403 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:27.403 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.403 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.403 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.403 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:27.662 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.663 [2024-11-20 09:20:52.866207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ac663dc1-e402-429a-8124-2d230d44baab '!=' ac663dc1-e402-429a-8124-2d230d44baab ']' 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63419 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63419 ']' 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63419 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63419 00:08:27.663 killing process with pid 63419 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63419' 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63419 00:08:27.663 [2024-11-20 09:20:52.946045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.663 [2024-11-20 09:20:52.946181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.663 09:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63419 00:08:27.663 [2024-11-20 09:20:52.946244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.663 [2024-11-20 09:20:52.946262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:27.921 [2024-11-20 09:20:53.195127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.300 09:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:29.300 00:08:29.300 real 0m6.604s 00:08:29.300 user 0m9.765s 00:08:29.300 sys 0m1.293s 00:08:29.300 09:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.300 09:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:29.300 ************************************ 00:08:29.300 END TEST raid_superblock_test 00:08:29.300 ************************************ 00:08:29.300 09:20:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:29.300 09:20:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.300 09:20:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.300 09:20:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.300 ************************************ 00:08:29.300 START TEST raid_read_error_test 00:08:29.300 ************************************ 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c4Sp2QYmKj 00:08:29.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63750 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63750 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63750 ']' 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.300 09:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.300 [2024-11-20 09:20:54.733087] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:08:29.300 [2024-11-20 09:20:54.733379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63750 ] 00:08:29.560 [2024-11-20 09:20:54.920158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.818 [2024-11-20 09:20:55.080277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.077 [2024-11-20 09:20:55.356985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.077 [2024-11-20 09:20:55.357054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.337 BaseBdev1_malloc 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:30.337 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 true 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 [2024-11-20 09:20:55.693874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:30.338 [2024-11-20 09:20:55.694030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.338 [2024-11-20 09:20:55.694082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:30.338 [2024-11-20 09:20:55.694125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.338 [2024-11-20 09:20:55.697108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.338 [2024-11-20 09:20:55.697236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:30.338 BaseBdev1 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 BaseBdev2_malloc 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 true 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 [2024-11-20 09:20:55.774892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:30.338 [2024-11-20 09:20:55.775020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.338 [2024-11-20 09:20:55.775075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:30.338 [2024-11-20 09:20:55.775117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.338 [2024-11-20 09:20:55.778073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.338 [2024-11-20 09:20:55.778188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:30.338 BaseBdev2 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.338 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.338 [2024-11-20 09:20:55.786953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.338 
[2024-11-20 09:20:55.789548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.338 [2024-11-20 09:20:55.789807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:30.338 [2024-11-20 09:20:55.789827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:30.338 [2024-11-20 09:20:55.790149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:30.338 [2024-11-20 09:20:55.790397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:30.338 [2024-11-20 09:20:55.790411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:30.338 [2024-11-20 09:20:55.790635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.597 "name": "raid_bdev1", 00:08:30.597 "uuid": "61abde9c-2853-403b-984a-e7a467118d35", 00:08:30.597 "strip_size_kb": 0, 00:08:30.597 "state": "online", 00:08:30.597 "raid_level": "raid1", 00:08:30.597 "superblock": true, 00:08:30.597 "num_base_bdevs": 2, 00:08:30.597 "num_base_bdevs_discovered": 2, 00:08:30.597 "num_base_bdevs_operational": 2, 00:08:30.597 "base_bdevs_list": [ 00:08:30.597 { 00:08:30.597 "name": "BaseBdev1", 00:08:30.597 "uuid": "8c2ceb1d-418a-5d01-a92a-a87d0f0849c7", 00:08:30.597 "is_configured": true, 00:08:30.597 "data_offset": 2048, 00:08:30.597 "data_size": 63488 00:08:30.597 }, 00:08:30.597 { 00:08:30.597 "name": "BaseBdev2", 00:08:30.597 "uuid": "87a730f0-431e-5981-a961-e5368191de5f", 00:08:30.597 "is_configured": true, 00:08:30.597 "data_offset": 2048, 00:08:30.597 "data_size": 63488 00:08:30.597 } 00:08:30.597 ] 00:08:30.597 }' 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.597 09:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.856 09:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 
00:08:30.856 09:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.115 [2024-11-20 09:20:56.379935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.054 "name": "raid_bdev1", 00:08:32.054 "uuid": "61abde9c-2853-403b-984a-e7a467118d35", 00:08:32.054 "strip_size_kb": 0, 00:08:32.054 "state": "online", 00:08:32.054 "raid_level": "raid1", 00:08:32.054 "superblock": true, 00:08:32.054 "num_base_bdevs": 2, 00:08:32.054 "num_base_bdevs_discovered": 2, 00:08:32.054 "num_base_bdevs_operational": 2, 00:08:32.054 "base_bdevs_list": [ 00:08:32.054 { 00:08:32.054 "name": "BaseBdev1", 00:08:32.054 "uuid": "8c2ceb1d-418a-5d01-a92a-a87d0f0849c7", 00:08:32.054 "is_configured": true, 00:08:32.054 "data_offset": 2048, 00:08:32.054 "data_size": 63488 00:08:32.054 }, 00:08:32.054 { 00:08:32.054 "name": "BaseBdev2", 00:08:32.054 "uuid": "87a730f0-431e-5981-a961-e5368191de5f", 00:08:32.054 "is_configured": true, 00:08:32.054 "data_offset": 2048, 00:08:32.054 "data_size": 63488 00:08:32.054 } 00:08:32.054 ] 00:08:32.054 }' 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.054 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.624 [2024-11-20 09:20:57.788231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.624 [2024-11-20 09:20:57.788344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.624 [2024-11-20 09:20:57.791555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.624 [2024-11-20 09:20:57.791660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.624 [2024-11-20 09:20:57.791794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.624 [2024-11-20 09:20:57.791851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63750 00:08:32.624 { 00:08:32.624 "results": [ 00:08:32.624 { 00:08:32.624 "job": "raid_bdev1", 00:08:32.624 "core_mask": "0x1", 00:08:32.624 "workload": "randrw", 00:08:32.624 "percentage": 50, 00:08:32.624 "status": "finished", 00:08:32.624 "queue_depth": 1, 00:08:32.624 "io_size": 131072, 00:08:32.624 "runtime": 1.408544, 00:08:32.624 "iops": 11898.101869732149, 00:08:32.624 "mibps": 1487.2627337165186, 00:08:32.624 "io_failed": 0, 00:08:32.624 "io_timeout": 0, 00:08:32.624 "avg_latency_us": 80.89665864212697, 00:08:32.624 "min_latency_us": 25.152838427947597, 00:08:32.624 "max_latency_us": 1674.172925764192 00:08:32.624 } 00:08:32.624 ], 00:08:32.624 "core_count": 1 00:08:32.624 } 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 63750 ']' 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63750 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.624 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63750 00:08:32.624 killing process with pid 63750 00:08:32.625 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.625 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.625 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63750' 00:08:32.625 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63750 00:08:32.625 [2024-11-20 09:20:57.834814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.625 09:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63750 00:08:32.625 [2024-11-20 09:20:58.011177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c4Sp2QYmKj 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:34.535 ************************************ 00:08:34.535 END TEST raid_read_error_test 00:08:34.535 ************************************ 00:08:34.535 09:20:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:34.535 00:08:34.535 real 0m4.904s 00:08:34.535 user 0m5.751s 00:08:34.535 sys 0m0.729s 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.535 09:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 09:20:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:34.535 09:20:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:34.535 09:20:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.535 09:20:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 ************************************ 00:08:34.535 START TEST raid_write_error_test 00:08:34.535 ************************************ 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pa5ZKzLlBk 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63897 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63897 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63897 ']' 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.535 09:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.535 [2024-11-20 09:20:59.690188] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:08:34.536 [2024-11-20 09:20:59.690333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63897 ] 00:08:34.536 [2024-11-20 09:20:59.857191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.795 [2024-11-20 09:21:00.023181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.056 [2024-11-20 09:21:00.293417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.056 [2024-11-20 09:21:00.293532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.315 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.315 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:35.315 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.315 09:21:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:35.315 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.315 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.315 BaseBdev1_malloc 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.316 true 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.316 [2024-11-20 09:21:00.687142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:35.316 [2024-11-20 09:21:00.687249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.316 [2024-11-20 09:21:00.687283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:35.316 [2024-11-20 09:21:00.687298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.316 [2024-11-20 09:21:00.690527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.316 [2024-11-20 09:21:00.690586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:08:35.316 BaseBdev1 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.316 BaseBdev2_malloc 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.316 true 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.316 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.316 [2024-11-20 09:21:00.766440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:35.316 [2024-11-20 09:21:00.766553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.316 [2024-11-20 09:21:00.766580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:35.316 [2024-11-20 09:21:00.766595] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.575 [2024-11-20 09:21:00.769668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.575 [2024-11-20 09:21:00.769772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:35.575 BaseBdev2 00:08:35.575 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.575 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:35.575 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.575 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.575 [2024-11-20 09:21:00.778642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.575 [2024-11-20 09:21:00.781195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.575 [2024-11-20 09:21:00.781555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.576 [2024-11-20 09:21:00.781578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.576 [2024-11-20 09:21:00.781923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:35.576 [2024-11-20 09:21:00.782183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.576 [2024-11-20 09:21:00.782195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:35.576 [2024-11-20 09:21:00.782499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.576 "name": "raid_bdev1", 00:08:35.576 "uuid": "c4301555-eb0b-4fec-8011-4c8c839279ef", 00:08:35.576 "strip_size_kb": 0, 00:08:35.576 "state": "online", 00:08:35.576 "raid_level": "raid1", 00:08:35.576 "superblock": true, 00:08:35.576 "num_base_bdevs": 2, 00:08:35.576 
"num_base_bdevs_discovered": 2, 00:08:35.576 "num_base_bdevs_operational": 2, 00:08:35.576 "base_bdevs_list": [ 00:08:35.576 { 00:08:35.576 "name": "BaseBdev1", 00:08:35.576 "uuid": "0bcbc107-4088-555d-9db4-1a3b3e974893", 00:08:35.576 "is_configured": true, 00:08:35.576 "data_offset": 2048, 00:08:35.576 "data_size": 63488 00:08:35.576 }, 00:08:35.576 { 00:08:35.576 "name": "BaseBdev2", 00:08:35.576 "uuid": "efc13dc9-63bb-5983-aaae-883a06db44d7", 00:08:35.576 "is_configured": true, 00:08:35.576 "data_offset": 2048, 00:08:35.576 "data_size": 63488 00:08:35.576 } 00:08:35.576 ] 00:08:35.576 }' 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.576 09:21:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.835 09:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:35.835 09:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.096 [2024-11-20 09:21:01.375335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:37.033 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:37.033 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.033 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.033 [2024-11-20 09:21:02.276514] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:37.033 [2024-11-20 09:21:02.276744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.033 [2024-11-20 09:21:02.277016] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:37.033 09:21:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.033 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.033 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.034 09:21:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.034 "name": "raid_bdev1", 00:08:37.034 "uuid": "c4301555-eb0b-4fec-8011-4c8c839279ef", 00:08:37.034 "strip_size_kb": 0, 00:08:37.034 "state": "online", 00:08:37.034 "raid_level": "raid1", 00:08:37.034 "superblock": true, 00:08:37.034 "num_base_bdevs": 2, 00:08:37.034 "num_base_bdevs_discovered": 1, 00:08:37.034 "num_base_bdevs_operational": 1, 00:08:37.034 "base_bdevs_list": [ 00:08:37.034 { 00:08:37.034 "name": null, 00:08:37.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.034 "is_configured": false, 00:08:37.034 "data_offset": 0, 00:08:37.034 "data_size": 63488 00:08:37.034 }, 00:08:37.034 { 00:08:37.034 "name": "BaseBdev2", 00:08:37.034 "uuid": "efc13dc9-63bb-5983-aaae-883a06db44d7", 00:08:37.034 "is_configured": true, 00:08:37.034 "data_offset": 2048, 00:08:37.034 "data_size": 63488 00:08:37.034 } 00:08:37.034 ] 00:08:37.034 }' 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.034 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.602 [2024-11-20 09:21:02.775435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.602 [2024-11-20 09:21:02.775509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.602 [2024-11-20 09:21:02.778727] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.602 [2024-11-20 09:21:02.778784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.602 [2024-11-20 09:21:02.778860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.602 [2024-11-20 09:21:02.778875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:37.602 { 00:08:37.602 "results": [ 00:08:37.602 { 00:08:37.602 "job": "raid_bdev1", 00:08:37.602 "core_mask": "0x1", 00:08:37.602 "workload": "randrw", 00:08:37.602 "percentage": 50, 00:08:37.602 "status": "finished", 00:08:37.602 "queue_depth": 1, 00:08:37.602 "io_size": 131072, 00:08:37.602 "runtime": 1.399941, 00:08:37.602 "iops": 12838.398189637992, 00:08:37.602 "mibps": 1604.799773704749, 00:08:37.602 "io_failed": 0, 00:08:37.602 "io_timeout": 0, 00:08:37.602 "avg_latency_us": 74.5386438707066, 00:08:37.602 "min_latency_us": 24.482096069868994, 00:08:37.602 "max_latency_us": 1774.3371179039302 00:08:37.602 } 00:08:37.602 ], 00:08:37.602 "core_count": 1 00:08:37.602 } 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63897 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63897 ']' 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63897 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63897 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.602 killing process with pid 63897 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63897' 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63897 00:08:37.602 [2024-11-20 09:21:02.828832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.602 09:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63897 00:08:37.602 [2024-11-20 09:21:02.998218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pa5ZKzLlBk 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:39.509 00:08:39.509 real 0m4.943s 00:08:39.509 user 0m5.794s 00:08:39.509 sys 0m0.750s 00:08:39.509 ************************************ 00:08:39.509 END TEST raid_write_error_test 00:08:39.509 ************************************ 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.509 09:21:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.509 09:21:04 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:39.509 09:21:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:39.509 09:21:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:39.509 09:21:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.509 09:21:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.509 09:21:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.509 ************************************ 00:08:39.509 START TEST raid_state_function_test 00:08:39.509 ************************************ 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:39.509 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64046 00:08:39.510 09:21:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64046' 00:08:39.510 Process raid pid: 64046 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64046 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64046 ']' 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.510 09:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.510 [2024-11-20 09:21:04.697355] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:08:39.510 [2024-11-20 09:21:04.697766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.510 [2024-11-20 09:21:04.887999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.775 [2024-11-20 09:21:05.054147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.047 [2024-11-20 09:21:05.360923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.047 [2024-11-20 09:21:05.360994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.307 [2024-11-20 09:21:05.653633] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.307 [2024-11-20 09:21:05.653824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.307 [2024-11-20 09:21:05.653868] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.307 [2024-11-20 09:21:05.653900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.307 [2024-11-20 09:21:05.653985] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:40.307 [2024-11-20 09:21:05.654026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.307 09:21:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.307 "name": "Existed_Raid", 00:08:40.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.307 "strip_size_kb": 64, 00:08:40.307 "state": "configuring", 00:08:40.307 "raid_level": "raid0", 00:08:40.307 "superblock": false, 00:08:40.307 "num_base_bdevs": 3, 00:08:40.307 "num_base_bdevs_discovered": 0, 00:08:40.307 "num_base_bdevs_operational": 3, 00:08:40.307 "base_bdevs_list": [ 00:08:40.307 { 00:08:40.307 "name": "BaseBdev1", 00:08:40.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.307 "is_configured": false, 00:08:40.307 "data_offset": 0, 00:08:40.307 "data_size": 0 00:08:40.307 }, 00:08:40.307 { 00:08:40.307 "name": "BaseBdev2", 00:08:40.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.307 "is_configured": false, 00:08:40.307 "data_offset": 0, 00:08:40.307 "data_size": 0 00:08:40.307 }, 00:08:40.307 { 00:08:40.307 "name": "BaseBdev3", 00:08:40.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.307 "is_configured": false, 00:08:40.307 "data_offset": 0, 00:08:40.307 "data_size": 0 00:08:40.307 } 00:08:40.307 ] 00:08:40.307 }' 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.307 09:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.876 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.876 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.876 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.876 [2024-11-20 09:21:06.160677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.876 [2024-11-20 09:21:06.160735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:08:40.876 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.876 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.876 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.877 [2024-11-20 09:21:06.168644] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.877 [2024-11-20 09:21:06.168791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.877 [2024-11-20 09:21:06.168809] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.877 [2024-11-20 09:21:06.168821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.877 [2024-11-20 09:21:06.168828] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.877 [2024-11-20 09:21:06.168840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.877 [2024-11-20 09:21:06.228401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.877 BaseBdev1 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.877 [ 00:08:40.877 { 00:08:40.877 "name": "BaseBdev1", 00:08:40.877 "aliases": [ 00:08:40.877 "5688f666-b4db-42c3-85e8-11c9cbaa5cda" 00:08:40.877 ], 00:08:40.877 "product_name": "Malloc disk", 00:08:40.877 "block_size": 512, 00:08:40.877 "num_blocks": 65536, 00:08:40.877 "uuid": "5688f666-b4db-42c3-85e8-11c9cbaa5cda", 00:08:40.877 "assigned_rate_limits": { 00:08:40.877 "rw_ios_per_sec": 0, 00:08:40.877 "rw_mbytes_per_sec": 0, 00:08:40.877 "r_mbytes_per_sec": 0, 00:08:40.877 "w_mbytes_per_sec": 0 00:08:40.877 }, 
00:08:40.877 "claimed": true, 00:08:40.877 "claim_type": "exclusive_write", 00:08:40.877 "zoned": false, 00:08:40.877 "supported_io_types": { 00:08:40.877 "read": true, 00:08:40.877 "write": true, 00:08:40.877 "unmap": true, 00:08:40.877 "flush": true, 00:08:40.877 "reset": true, 00:08:40.877 "nvme_admin": false, 00:08:40.877 "nvme_io": false, 00:08:40.877 "nvme_io_md": false, 00:08:40.877 "write_zeroes": true, 00:08:40.877 "zcopy": true, 00:08:40.877 "get_zone_info": false, 00:08:40.877 "zone_management": false, 00:08:40.877 "zone_append": false, 00:08:40.877 "compare": false, 00:08:40.877 "compare_and_write": false, 00:08:40.877 "abort": true, 00:08:40.877 "seek_hole": false, 00:08:40.877 "seek_data": false, 00:08:40.877 "copy": true, 00:08:40.877 "nvme_iov_md": false 00:08:40.877 }, 00:08:40.877 "memory_domains": [ 00:08:40.877 { 00:08:40.877 "dma_device_id": "system", 00:08:40.877 "dma_device_type": 1 00:08:40.877 }, 00:08:40.877 { 00:08:40.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.877 "dma_device_type": 2 00:08:40.877 } 00:08:40.877 ], 00:08:40.877 "driver_specific": {} 00:08:40.877 } 00:08:40.877 ] 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.877 09:21:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.877 "name": "Existed_Raid", 00:08:40.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.877 "strip_size_kb": 64, 00:08:40.877 "state": "configuring", 00:08:40.877 "raid_level": "raid0", 00:08:40.877 "superblock": false, 00:08:40.877 "num_base_bdevs": 3, 00:08:40.877 "num_base_bdevs_discovered": 1, 00:08:40.877 "num_base_bdevs_operational": 3, 00:08:40.877 "base_bdevs_list": [ 00:08:40.877 { 00:08:40.877 "name": "BaseBdev1", 00:08:40.877 "uuid": "5688f666-b4db-42c3-85e8-11c9cbaa5cda", 00:08:40.877 "is_configured": true, 00:08:40.877 "data_offset": 0, 00:08:40.877 "data_size": 65536 00:08:40.877 }, 00:08:40.877 { 00:08:40.877 "name": "BaseBdev2", 00:08:40.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.877 "is_configured": false, 00:08:40.877 
"data_offset": 0, 00:08:40.877 "data_size": 0 00:08:40.877 }, 00:08:40.877 { 00:08:40.877 "name": "BaseBdev3", 00:08:40.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.877 "is_configured": false, 00:08:40.877 "data_offset": 0, 00:08:40.877 "data_size": 0 00:08:40.877 } 00:08:40.877 ] 00:08:40.877 }' 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.877 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.447 [2024-11-20 09:21:06.743665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.447 [2024-11-20 09:21:06.743741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.447 [2024-11-20 09:21:06.751711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.447 [2024-11-20 09:21:06.754145] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.447 [2024-11-20 09:21:06.754277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:41.447 [2024-11-20 09:21:06.754294] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:41.447 [2024-11-20 09:21:06.754306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.447 "name": "Existed_Raid", 00:08:41.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.447 "strip_size_kb": 64, 00:08:41.447 "state": "configuring", 00:08:41.447 "raid_level": "raid0", 00:08:41.447 "superblock": false, 00:08:41.447 "num_base_bdevs": 3, 00:08:41.447 "num_base_bdevs_discovered": 1, 00:08:41.447 "num_base_bdevs_operational": 3, 00:08:41.447 "base_bdevs_list": [ 00:08:41.447 { 00:08:41.447 "name": "BaseBdev1", 00:08:41.447 "uuid": "5688f666-b4db-42c3-85e8-11c9cbaa5cda", 00:08:41.447 "is_configured": true, 00:08:41.447 "data_offset": 0, 00:08:41.447 "data_size": 65536 00:08:41.447 }, 00:08:41.447 { 00:08:41.447 "name": "BaseBdev2", 00:08:41.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.447 "is_configured": false, 00:08:41.447 "data_offset": 0, 00:08:41.447 "data_size": 0 00:08:41.447 }, 00:08:41.447 { 00:08:41.447 "name": "BaseBdev3", 00:08:41.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.447 "is_configured": false, 00:08:41.447 "data_offset": 0, 00:08:41.447 "data_size": 0 00:08:41.447 } 00:08:41.447 ] 00:08:41.447 }' 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.447 09:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 [2024-11-20 09:21:07.277913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.017 BaseBdev2 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 [ 00:08:42.017 { 00:08:42.017 "name": "BaseBdev2", 00:08:42.017 "aliases": [ 00:08:42.017 "e155ae6b-fd86-474c-a849-16fad274202c" 00:08:42.017 ], 00:08:42.017 
"product_name": "Malloc disk", 00:08:42.017 "block_size": 512, 00:08:42.017 "num_blocks": 65536, 00:08:42.017 "uuid": "e155ae6b-fd86-474c-a849-16fad274202c", 00:08:42.017 "assigned_rate_limits": { 00:08:42.017 "rw_ios_per_sec": 0, 00:08:42.017 "rw_mbytes_per_sec": 0, 00:08:42.017 "r_mbytes_per_sec": 0, 00:08:42.017 "w_mbytes_per_sec": 0 00:08:42.017 }, 00:08:42.017 "claimed": true, 00:08:42.017 "claim_type": "exclusive_write", 00:08:42.017 "zoned": false, 00:08:42.017 "supported_io_types": { 00:08:42.017 "read": true, 00:08:42.017 "write": true, 00:08:42.017 "unmap": true, 00:08:42.017 "flush": true, 00:08:42.017 "reset": true, 00:08:42.017 "nvme_admin": false, 00:08:42.017 "nvme_io": false, 00:08:42.017 "nvme_io_md": false, 00:08:42.017 "write_zeroes": true, 00:08:42.017 "zcopy": true, 00:08:42.017 "get_zone_info": false, 00:08:42.017 "zone_management": false, 00:08:42.017 "zone_append": false, 00:08:42.017 "compare": false, 00:08:42.017 "compare_and_write": false, 00:08:42.017 "abort": true, 00:08:42.017 "seek_hole": false, 00:08:42.017 "seek_data": false, 00:08:42.017 "copy": true, 00:08:42.017 "nvme_iov_md": false 00:08:42.017 }, 00:08:42.017 "memory_domains": [ 00:08:42.017 { 00:08:42.017 "dma_device_id": "system", 00:08:42.017 "dma_device_type": 1 00:08:42.017 }, 00:08:42.017 { 00:08:42.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.017 "dma_device_type": 2 00:08:42.017 } 00:08:42.017 ], 00:08:42.017 "driver_specific": {} 00:08:42.017 } 00:08:42.017 ] 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.017 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.017 "name": "Existed_Raid", 00:08:42.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.017 "strip_size_kb": 64, 00:08:42.017 "state": "configuring", 00:08:42.017 "raid_level": "raid0", 00:08:42.017 "superblock": false, 00:08:42.017 
"num_base_bdevs": 3, 00:08:42.017 "num_base_bdevs_discovered": 2, 00:08:42.017 "num_base_bdevs_operational": 3, 00:08:42.017 "base_bdevs_list": [ 00:08:42.017 { 00:08:42.017 "name": "BaseBdev1", 00:08:42.017 "uuid": "5688f666-b4db-42c3-85e8-11c9cbaa5cda", 00:08:42.017 "is_configured": true, 00:08:42.017 "data_offset": 0, 00:08:42.017 "data_size": 65536 00:08:42.017 }, 00:08:42.018 { 00:08:42.018 "name": "BaseBdev2", 00:08:42.018 "uuid": "e155ae6b-fd86-474c-a849-16fad274202c", 00:08:42.018 "is_configured": true, 00:08:42.018 "data_offset": 0, 00:08:42.018 "data_size": 65536 00:08:42.018 }, 00:08:42.018 { 00:08:42.018 "name": "BaseBdev3", 00:08:42.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.018 "is_configured": false, 00:08:42.018 "data_offset": 0, 00:08:42.018 "data_size": 0 00:08:42.018 } 00:08:42.018 ] 00:08:42.018 }' 00:08:42.018 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.018 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.589 [2024-11-20 09:21:07.880970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.589 [2024-11-20 09:21:07.881041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.589 [2024-11-20 09:21:07.881060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:42.589 [2024-11-20 09:21:07.881420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:42.589 [2024-11-20 09:21:07.881671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:08:42.589 [2024-11-20 09:21:07.881683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:42.589 [2024-11-20 09:21:07.882055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.589 BaseBdev3 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.589 [ 00:08:42.589 { 00:08:42.589 "name": "BaseBdev3", 00:08:42.589 "aliases": [ 00:08:42.589 
"630b962f-6ead-4dd1-9fe3-827ded8db942" 00:08:42.589 ], 00:08:42.589 "product_name": "Malloc disk", 00:08:42.589 "block_size": 512, 00:08:42.589 "num_blocks": 65536, 00:08:42.589 "uuid": "630b962f-6ead-4dd1-9fe3-827ded8db942", 00:08:42.589 "assigned_rate_limits": { 00:08:42.589 "rw_ios_per_sec": 0, 00:08:42.589 "rw_mbytes_per_sec": 0, 00:08:42.589 "r_mbytes_per_sec": 0, 00:08:42.589 "w_mbytes_per_sec": 0 00:08:42.589 }, 00:08:42.589 "claimed": true, 00:08:42.589 "claim_type": "exclusive_write", 00:08:42.589 "zoned": false, 00:08:42.589 "supported_io_types": { 00:08:42.589 "read": true, 00:08:42.589 "write": true, 00:08:42.589 "unmap": true, 00:08:42.589 "flush": true, 00:08:42.589 "reset": true, 00:08:42.589 "nvme_admin": false, 00:08:42.589 "nvme_io": false, 00:08:42.589 "nvme_io_md": false, 00:08:42.589 "write_zeroes": true, 00:08:42.589 "zcopy": true, 00:08:42.589 "get_zone_info": false, 00:08:42.589 "zone_management": false, 00:08:42.589 "zone_append": false, 00:08:42.589 "compare": false, 00:08:42.589 "compare_and_write": false, 00:08:42.589 "abort": true, 00:08:42.589 "seek_hole": false, 00:08:42.589 "seek_data": false, 00:08:42.589 "copy": true, 00:08:42.589 "nvme_iov_md": false 00:08:42.589 }, 00:08:42.589 "memory_domains": [ 00:08:42.589 { 00:08:42.589 "dma_device_id": "system", 00:08:42.589 "dma_device_type": 1 00:08:42.589 }, 00:08:42.589 { 00:08:42.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.589 "dma_device_type": 2 00:08:42.589 } 00:08:42.589 ], 00:08:42.589 "driver_specific": {} 00:08:42.589 } 00:08:42.589 ] 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.589 
09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.589 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.590 "name": "Existed_Raid", 00:08:42.590 "uuid": "c6a3a056-680f-4e7b-a773-6c9b55a7aa77", 00:08:42.590 "strip_size_kb": 64, 00:08:42.590 "state": "online", 00:08:42.590 
"raid_level": "raid0", 00:08:42.590 "superblock": false, 00:08:42.590 "num_base_bdevs": 3, 00:08:42.590 "num_base_bdevs_discovered": 3, 00:08:42.590 "num_base_bdevs_operational": 3, 00:08:42.590 "base_bdevs_list": [ 00:08:42.590 { 00:08:42.590 "name": "BaseBdev1", 00:08:42.590 "uuid": "5688f666-b4db-42c3-85e8-11c9cbaa5cda", 00:08:42.590 "is_configured": true, 00:08:42.590 "data_offset": 0, 00:08:42.590 "data_size": 65536 00:08:42.590 }, 00:08:42.590 { 00:08:42.590 "name": "BaseBdev2", 00:08:42.590 "uuid": "e155ae6b-fd86-474c-a849-16fad274202c", 00:08:42.590 "is_configured": true, 00:08:42.590 "data_offset": 0, 00:08:42.590 "data_size": 65536 00:08:42.590 }, 00:08:42.590 { 00:08:42.590 "name": "BaseBdev3", 00:08:42.590 "uuid": "630b962f-6ead-4dd1-9fe3-827ded8db942", 00:08:42.590 "is_configured": true, 00:08:42.590 "data_offset": 0, 00:08:42.590 "data_size": 65536 00:08:42.590 } 00:08:42.590 ] 00:08:42.590 }' 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.590 09:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.158 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.158 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.159 [2024-11-20 09:21:08.392740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.159 "name": "Existed_Raid", 00:08:43.159 "aliases": [ 00:08:43.159 "c6a3a056-680f-4e7b-a773-6c9b55a7aa77" 00:08:43.159 ], 00:08:43.159 "product_name": "Raid Volume", 00:08:43.159 "block_size": 512, 00:08:43.159 "num_blocks": 196608, 00:08:43.159 "uuid": "c6a3a056-680f-4e7b-a773-6c9b55a7aa77", 00:08:43.159 "assigned_rate_limits": { 00:08:43.159 "rw_ios_per_sec": 0, 00:08:43.159 "rw_mbytes_per_sec": 0, 00:08:43.159 "r_mbytes_per_sec": 0, 00:08:43.159 "w_mbytes_per_sec": 0 00:08:43.159 }, 00:08:43.159 "claimed": false, 00:08:43.159 "zoned": false, 00:08:43.159 "supported_io_types": { 00:08:43.159 "read": true, 00:08:43.159 "write": true, 00:08:43.159 "unmap": true, 00:08:43.159 "flush": true, 00:08:43.159 "reset": true, 00:08:43.159 "nvme_admin": false, 00:08:43.159 "nvme_io": false, 00:08:43.159 "nvme_io_md": false, 00:08:43.159 "write_zeroes": true, 00:08:43.159 "zcopy": false, 00:08:43.159 "get_zone_info": false, 00:08:43.159 "zone_management": false, 00:08:43.159 "zone_append": false, 00:08:43.159 "compare": false, 00:08:43.159 "compare_and_write": false, 00:08:43.159 "abort": false, 00:08:43.159 "seek_hole": false, 00:08:43.159 "seek_data": false, 00:08:43.159 "copy": false, 00:08:43.159 "nvme_iov_md": false 00:08:43.159 }, 00:08:43.159 "memory_domains": [ 00:08:43.159 { 00:08:43.159 "dma_device_id": "system", 00:08:43.159 "dma_device_type": 1 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.159 "dma_device_type": 2 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "dma_device_id": "system", 00:08:43.159 "dma_device_type": 1 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.159 "dma_device_type": 2 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "dma_device_id": "system", 00:08:43.159 "dma_device_type": 1 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.159 "dma_device_type": 2 00:08:43.159 } 00:08:43.159 ], 00:08:43.159 "driver_specific": { 00:08:43.159 "raid": { 00:08:43.159 "uuid": "c6a3a056-680f-4e7b-a773-6c9b55a7aa77", 00:08:43.159 "strip_size_kb": 64, 00:08:43.159 "state": "online", 00:08:43.159 "raid_level": "raid0", 00:08:43.159 "superblock": false, 00:08:43.159 "num_base_bdevs": 3, 00:08:43.159 "num_base_bdevs_discovered": 3, 00:08:43.159 "num_base_bdevs_operational": 3, 00:08:43.159 "base_bdevs_list": [ 00:08:43.159 { 00:08:43.159 "name": "BaseBdev1", 00:08:43.159 "uuid": "5688f666-b4db-42c3-85e8-11c9cbaa5cda", 00:08:43.159 "is_configured": true, 00:08:43.159 "data_offset": 0, 00:08:43.159 "data_size": 65536 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "name": "BaseBdev2", 00:08:43.159 "uuid": "e155ae6b-fd86-474c-a849-16fad274202c", 00:08:43.159 "is_configured": true, 00:08:43.159 "data_offset": 0, 00:08:43.159 "data_size": 65536 00:08:43.159 }, 00:08:43.159 { 00:08:43.159 "name": "BaseBdev3", 00:08:43.159 "uuid": "630b962f-6ead-4dd1-9fe3-827ded8db942", 00:08:43.159 "is_configured": true, 00:08:43.159 "data_offset": 0, 00:08:43.159 "data_size": 65536 00:08:43.159 } 00:08:43.159 ] 00:08:43.159 } 00:08:43.159 } 00:08:43.159 }' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:08:43.159 BaseBdev2 00:08:43.159 BaseBdev3' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.159 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.419 09:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.419 [2024-11-20 09:21:08.671980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.419 [2024-11-20 09:21:08.672108] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.419 [2024-11-20 09:21:08.672199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.419 09:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.419 "name": "Existed_Raid", 00:08:43.419 "uuid": "c6a3a056-680f-4e7b-a773-6c9b55a7aa77", 00:08:43.419 "strip_size_kb": 64, 00:08:43.419 "state": "offline", 00:08:43.419 "raid_level": "raid0", 00:08:43.419 "superblock": false, 00:08:43.419 "num_base_bdevs": 3, 00:08:43.419 "num_base_bdevs_discovered": 2, 00:08:43.419 "num_base_bdevs_operational": 2, 00:08:43.419 "base_bdevs_list": [ 00:08:43.419 { 00:08:43.419 "name": null, 00:08:43.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.419 "is_configured": false, 00:08:43.419 "data_offset": 0, 00:08:43.419 "data_size": 65536 00:08:43.419 }, 00:08:43.419 { 00:08:43.419 "name": "BaseBdev2", 00:08:43.419 "uuid": "e155ae6b-fd86-474c-a849-16fad274202c", 00:08:43.419 "is_configured": true, 00:08:43.419 "data_offset": 0, 00:08:43.419 "data_size": 65536 00:08:43.419 }, 00:08:43.419 { 00:08:43.419 "name": "BaseBdev3", 00:08:43.419 "uuid": "630b962f-6ead-4dd1-9fe3-827ded8db942", 00:08:43.419 "is_configured": true, 00:08:43.419 "data_offset": 0, 00:08:43.419 "data_size": 65536 00:08:43.419 } 00:08:43.419 ] 00:08:43.419 }' 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.419 09:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.989 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.989 [2024-11-20 09:21:09.317479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 [2024-11-20 09:21:09.504121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.248 [2024-11-20 09:21:09.504212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.248 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 BaseBdev2 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 [ 00:08:44.508 { 00:08:44.508 "name": "BaseBdev2", 00:08:44.508 "aliases": [ 00:08:44.508 "176d615d-aebf-4da8-ab19-e2e3bf35febe" 00:08:44.508 ], 00:08:44.508 "product_name": "Malloc disk", 00:08:44.508 "block_size": 512, 00:08:44.508 "num_blocks": 65536, 00:08:44.508 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:44.508 "assigned_rate_limits": { 00:08:44.508 "rw_ios_per_sec": 0, 00:08:44.508 "rw_mbytes_per_sec": 0, 00:08:44.508 "r_mbytes_per_sec": 0, 00:08:44.508 "w_mbytes_per_sec": 0 00:08:44.508 }, 00:08:44.508 "claimed": false, 00:08:44.508 "zoned": false, 00:08:44.508 "supported_io_types": { 00:08:44.508 "read": true, 00:08:44.508 "write": true, 00:08:44.508 "unmap": true, 00:08:44.508 "flush": true, 00:08:44.508 "reset": true, 00:08:44.508 "nvme_admin": false, 00:08:44.508 "nvme_io": false, 00:08:44.508 "nvme_io_md": false, 00:08:44.508 "write_zeroes": true, 00:08:44.508 "zcopy": true, 00:08:44.508 "get_zone_info": false, 00:08:44.508 "zone_management": false, 00:08:44.508 "zone_append": false, 00:08:44.508 "compare": false, 00:08:44.508 "compare_and_write": false, 00:08:44.508 "abort": true, 00:08:44.508 "seek_hole": false, 00:08:44.508 "seek_data": false, 00:08:44.508 "copy": true, 00:08:44.508 "nvme_iov_md": false 00:08:44.508 }, 00:08:44.508 "memory_domains": [ 00:08:44.508 { 00:08:44.508 "dma_device_id": "system", 00:08:44.508 "dma_device_type": 1 00:08:44.508 }, 00:08:44.508 { 00:08:44.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.508 "dma_device_type": 2 00:08:44.508 } 00:08:44.508 ], 00:08:44.508 "driver_specific": {} 00:08:44.508 } 00:08:44.508 ] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 BaseBdev3 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.508 
09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 [ 00:08:44.508 { 00:08:44.508 "name": "BaseBdev3", 00:08:44.508 "aliases": [ 00:08:44.508 "6784d42d-5916-4244-9d56-5919b9d8ed90" 00:08:44.508 ], 00:08:44.508 "product_name": "Malloc disk", 00:08:44.508 "block_size": 512, 00:08:44.508 "num_blocks": 65536, 00:08:44.508 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:44.508 "assigned_rate_limits": { 00:08:44.508 "rw_ios_per_sec": 0, 00:08:44.508 "rw_mbytes_per_sec": 0, 00:08:44.508 "r_mbytes_per_sec": 0, 00:08:44.508 "w_mbytes_per_sec": 0 00:08:44.508 }, 00:08:44.508 "claimed": false, 00:08:44.508 "zoned": false, 00:08:44.508 "supported_io_types": { 00:08:44.508 "read": true, 00:08:44.508 "write": true, 00:08:44.508 "unmap": true, 00:08:44.508 "flush": true, 00:08:44.508 "reset": true, 00:08:44.508 "nvme_admin": false, 00:08:44.508 "nvme_io": false, 00:08:44.508 "nvme_io_md": false, 00:08:44.508 "write_zeroes": true, 00:08:44.508 "zcopy": true, 00:08:44.508 "get_zone_info": false, 00:08:44.508 "zone_management": false, 00:08:44.508 "zone_append": false, 00:08:44.508 "compare": false, 00:08:44.508 "compare_and_write": false, 00:08:44.508 "abort": true, 00:08:44.508 "seek_hole": false, 00:08:44.508 "seek_data": false, 00:08:44.508 "copy": true, 00:08:44.508 "nvme_iov_md": false 00:08:44.508 }, 00:08:44.508 "memory_domains": [ 00:08:44.508 { 00:08:44.508 "dma_device_id": "system", 00:08:44.508 "dma_device_type": 1 00:08:44.508 }, 00:08:44.508 { 00:08:44.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.508 "dma_device_type": 2 00:08:44.508 } 00:08:44.508 ], 00:08:44.508 "driver_specific": {} 00:08:44.508 } 00:08:44.508 ] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 
-- # (( i++ )) 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 [2024-11-20 09:21:09.856774] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.508 [2024-11-20 09:21:09.856955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.508 [2024-11-20 09:21:09.857030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.508 [2024-11-20 09:21:09.859764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.508 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.509 09:21:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.509 "name": "Existed_Raid", 00:08:44.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.509 "strip_size_kb": 64, 00:08:44.509 "state": "configuring", 00:08:44.509 "raid_level": "raid0", 00:08:44.509 "superblock": false, 00:08:44.509 "num_base_bdevs": 3, 00:08:44.509 "num_base_bdevs_discovered": 2, 00:08:44.509 "num_base_bdevs_operational": 3, 00:08:44.509 "base_bdevs_list": [ 00:08:44.509 { 00:08:44.509 "name": "BaseBdev1", 00:08:44.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.509 "is_configured": false, 00:08:44.509 "data_offset": 0, 00:08:44.509 "data_size": 0 00:08:44.509 }, 00:08:44.509 { 00:08:44.509 "name": "BaseBdev2", 00:08:44.509 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:44.509 "is_configured": true, 00:08:44.509 "data_offset": 0, 00:08:44.509 "data_size": 65536 00:08:44.509 }, 00:08:44.509 { 00:08:44.509 "name": "BaseBdev3", 00:08:44.509 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:44.509 "is_configured": true, 00:08:44.509 "data_offset": 0, 
00:08:44.509 "data_size": 65536 00:08:44.509 } 00:08:44.509 ] 00:08:44.509 }' 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.509 09:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.078 [2024-11-20 09:21:10.339963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.078 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.079 "name": "Existed_Raid", 00:08:45.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.079 "strip_size_kb": 64, 00:08:45.079 "state": "configuring", 00:08:45.079 "raid_level": "raid0", 00:08:45.079 "superblock": false, 00:08:45.079 "num_base_bdevs": 3, 00:08:45.079 "num_base_bdevs_discovered": 1, 00:08:45.079 "num_base_bdevs_operational": 3, 00:08:45.079 "base_bdevs_list": [ 00:08:45.079 { 00:08:45.079 "name": "BaseBdev1", 00:08:45.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.079 "is_configured": false, 00:08:45.079 "data_offset": 0, 00:08:45.079 "data_size": 0 00:08:45.079 }, 00:08:45.079 { 00:08:45.079 "name": null, 00:08:45.079 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:45.079 "is_configured": false, 00:08:45.079 "data_offset": 0, 00:08:45.079 "data_size": 65536 00:08:45.079 }, 00:08:45.079 { 00:08:45.079 "name": "BaseBdev3", 00:08:45.079 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:45.079 "is_configured": true, 00:08:45.079 "data_offset": 0, 00:08:45.079 "data_size": 65536 00:08:45.079 } 00:08:45.079 ] 00:08:45.079 }' 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.079 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.647 09:21:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.647 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.648 [2024-11-20 09:21:10.939700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.648 BaseBdev1 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.648 [ 00:08:45.648 { 00:08:45.648 "name": "BaseBdev1", 00:08:45.648 "aliases": [ 00:08:45.648 "0ee15254-eee8-4157-afdf-4f4b211ce64e" 00:08:45.648 ], 00:08:45.648 "product_name": "Malloc disk", 00:08:45.648 "block_size": 512, 00:08:45.648 "num_blocks": 65536, 00:08:45.648 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:45.648 "assigned_rate_limits": { 00:08:45.648 "rw_ios_per_sec": 0, 00:08:45.648 "rw_mbytes_per_sec": 0, 00:08:45.648 "r_mbytes_per_sec": 0, 00:08:45.648 "w_mbytes_per_sec": 0 00:08:45.648 }, 00:08:45.648 "claimed": true, 00:08:45.648 "claim_type": "exclusive_write", 00:08:45.648 "zoned": false, 00:08:45.648 "supported_io_types": { 00:08:45.648 "read": true, 00:08:45.648 "write": true, 00:08:45.648 "unmap": true, 00:08:45.648 "flush": true, 00:08:45.648 "reset": true, 00:08:45.648 "nvme_admin": false, 00:08:45.648 "nvme_io": false, 00:08:45.648 "nvme_io_md": false, 00:08:45.648 "write_zeroes": true, 00:08:45.648 "zcopy": true, 00:08:45.648 "get_zone_info": false, 00:08:45.648 "zone_management": false, 00:08:45.648 "zone_append": false, 00:08:45.648 "compare": false, 00:08:45.648 "compare_and_write": false, 00:08:45.648 "abort": true, 00:08:45.648 "seek_hole": false, 00:08:45.648 "seek_data": false, 00:08:45.648 
"copy": true, 00:08:45.648 "nvme_iov_md": false 00:08:45.648 }, 00:08:45.648 "memory_domains": [ 00:08:45.648 { 00:08:45.648 "dma_device_id": "system", 00:08:45.648 "dma_device_type": 1 00:08:45.648 }, 00:08:45.648 { 00:08:45.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.648 "dma_device_type": 2 00:08:45.648 } 00:08:45.648 ], 00:08:45.648 "driver_specific": {} 00:08:45.648 } 00:08:45.648 ] 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.648 09:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.648 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.648 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.648 "name": "Existed_Raid", 00:08:45.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.648 "strip_size_kb": 64, 00:08:45.648 "state": "configuring", 00:08:45.648 "raid_level": "raid0", 00:08:45.648 "superblock": false, 00:08:45.648 "num_base_bdevs": 3, 00:08:45.648 "num_base_bdevs_discovered": 2, 00:08:45.648 "num_base_bdevs_operational": 3, 00:08:45.648 "base_bdevs_list": [ 00:08:45.648 { 00:08:45.648 "name": "BaseBdev1", 00:08:45.648 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:45.648 "is_configured": true, 00:08:45.648 "data_offset": 0, 00:08:45.648 "data_size": 65536 00:08:45.648 }, 00:08:45.648 { 00:08:45.648 "name": null, 00:08:45.648 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:45.648 "is_configured": false, 00:08:45.648 "data_offset": 0, 00:08:45.648 "data_size": 65536 00:08:45.648 }, 00:08:45.648 { 00:08:45.648 "name": "BaseBdev3", 00:08:45.648 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:45.648 "is_configured": true, 00:08:45.648 "data_offset": 0, 00:08:45.648 "data_size": 65536 00:08:45.648 } 00:08:45.648 ] 00:08:45.648 }' 00:08:45.648 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.648 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.215 [2024-11-20 09:21:11.510889] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.215 09:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.215 "name": "Existed_Raid", 00:08:46.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.215 "strip_size_kb": 64, 00:08:46.215 "state": "configuring", 00:08:46.215 "raid_level": "raid0", 00:08:46.215 "superblock": false, 00:08:46.215 "num_base_bdevs": 3, 00:08:46.215 "num_base_bdevs_discovered": 1, 00:08:46.215 "num_base_bdevs_operational": 3, 00:08:46.215 "base_bdevs_list": [ 00:08:46.215 { 00:08:46.215 "name": "BaseBdev1", 00:08:46.215 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:46.215 "is_configured": true, 00:08:46.215 "data_offset": 0, 00:08:46.215 "data_size": 65536 00:08:46.215 }, 00:08:46.215 { 00:08:46.215 "name": null, 00:08:46.215 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:46.215 "is_configured": false, 00:08:46.215 "data_offset": 0, 00:08:46.215 "data_size": 65536 00:08:46.215 }, 00:08:46.215 { 00:08:46.215 "name": null, 00:08:46.215 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:46.215 "is_configured": false, 00:08:46.215 "data_offset": 0, 00:08:46.215 "data_size": 65536 00:08:46.215 } 00:08:46.215 ] 00:08:46.215 }' 00:08:46.215 09:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.215 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.783 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.783 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.783 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.783 09:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:46.783 09:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.783 [2024-11-20 09:21:12.034062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.783 09:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.783 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.783 "name": "Existed_Raid", 00:08:46.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.783 "strip_size_kb": 64, 00:08:46.783 "state": "configuring", 00:08:46.783 "raid_level": "raid0", 00:08:46.783 "superblock": false, 00:08:46.783 "num_base_bdevs": 3, 00:08:46.783 "num_base_bdevs_discovered": 2, 00:08:46.783 "num_base_bdevs_operational": 3, 00:08:46.783 "base_bdevs_list": [ 00:08:46.783 { 00:08:46.783 "name": "BaseBdev1", 00:08:46.783 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:46.783 "is_configured": true, 00:08:46.783 "data_offset": 0, 00:08:46.783 "data_size": 65536 00:08:46.783 }, 00:08:46.783 { 00:08:46.783 "name": null, 00:08:46.783 
"uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:46.783 "is_configured": false, 00:08:46.783 "data_offset": 0, 00:08:46.783 "data_size": 65536 00:08:46.783 }, 00:08:46.783 { 00:08:46.783 "name": "BaseBdev3", 00:08:46.783 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:46.783 "is_configured": true, 00:08:46.783 "data_offset": 0, 00:08:46.784 "data_size": 65536 00:08:46.784 } 00:08:46.784 ] 00:08:46.784 }' 00:08:46.784 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.784 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.043 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.043 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.302 [2024-11-20 09:21:12.537242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.302 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.302 "name": "Existed_Raid", 00:08:47.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.302 "strip_size_kb": 64, 00:08:47.302 "state": "configuring", 00:08:47.302 "raid_level": "raid0", 00:08:47.302 "superblock": false, 00:08:47.302 "num_base_bdevs": 3, 00:08:47.303 
"num_base_bdevs_discovered": 1, 00:08:47.303 "num_base_bdevs_operational": 3, 00:08:47.303 "base_bdevs_list": [ 00:08:47.303 { 00:08:47.303 "name": null, 00:08:47.303 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:47.303 "is_configured": false, 00:08:47.303 "data_offset": 0, 00:08:47.303 "data_size": 65536 00:08:47.303 }, 00:08:47.303 { 00:08:47.303 "name": null, 00:08:47.303 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:47.303 "is_configured": false, 00:08:47.303 "data_offset": 0, 00:08:47.303 "data_size": 65536 00:08:47.303 }, 00:08:47.303 { 00:08:47.303 "name": "BaseBdev3", 00:08:47.303 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:47.303 "is_configured": true, 00:08:47.303 "data_offset": 0, 00:08:47.303 "data_size": 65536 00:08:47.303 } 00:08:47.303 ] 00:08:47.303 }' 00:08:47.303 09:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.303 09:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:47.871 [2024-11-20 09:21:13.183574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.871 "name": "Existed_Raid", 00:08:47.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.871 "strip_size_kb": 64, 00:08:47.871 "state": "configuring", 00:08:47.871 "raid_level": "raid0", 00:08:47.871 "superblock": false, 00:08:47.871 "num_base_bdevs": 3, 00:08:47.871 "num_base_bdevs_discovered": 2, 00:08:47.871 "num_base_bdevs_operational": 3, 00:08:47.871 "base_bdevs_list": [ 00:08:47.871 { 00:08:47.871 "name": null, 00:08:47.871 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:47.871 "is_configured": false, 00:08:47.871 "data_offset": 0, 00:08:47.871 "data_size": 65536 00:08:47.871 }, 00:08:47.871 { 00:08:47.871 "name": "BaseBdev2", 00:08:47.871 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:47.871 "is_configured": true, 00:08:47.871 "data_offset": 0, 00:08:47.871 "data_size": 65536 00:08:47.871 }, 00:08:47.871 { 00:08:47.871 "name": "BaseBdev3", 00:08:47.871 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:47.871 "is_configured": true, 00:08:47.871 "data_offset": 0, 00:08:47.871 "data_size": 65536 00:08:47.871 } 00:08:47.871 ] 00:08:47.871 }' 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.871 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.438 
09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ee15254-eee8-4157-afdf-4f4b211ce64e 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 [2024-11-20 09:21:13.755105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:48.438 [2024-11-20 09:21:13.755180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:48.438 [2024-11-20 09:21:13.755193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:48.438 [2024-11-20 09:21:13.755570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:48.438 [2024-11-20 09:21:13.755796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:48.438 [2024-11-20 09:21:13.755808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:48.438 [2024-11-20 09:21:13.756157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.438 NewBaseBdev 00:08:48.438 09:21:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.438 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.438 [ 00:08:48.438 { 00:08:48.438 "name": "NewBaseBdev", 00:08:48.438 "aliases": [ 00:08:48.438 "0ee15254-eee8-4157-afdf-4f4b211ce64e" 00:08:48.438 ], 00:08:48.438 "product_name": "Malloc disk", 00:08:48.438 "block_size": 512, 00:08:48.439 "num_blocks": 65536, 00:08:48.439 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:48.439 "assigned_rate_limits": { 00:08:48.439 "rw_ios_per_sec": 0, 00:08:48.439 "rw_mbytes_per_sec": 0, 
00:08:48.439 "r_mbytes_per_sec": 0, 00:08:48.439 "w_mbytes_per_sec": 0 00:08:48.439 }, 00:08:48.439 "claimed": true, 00:08:48.439 "claim_type": "exclusive_write", 00:08:48.439 "zoned": false, 00:08:48.439 "supported_io_types": { 00:08:48.439 "read": true, 00:08:48.439 "write": true, 00:08:48.439 "unmap": true, 00:08:48.439 "flush": true, 00:08:48.439 "reset": true, 00:08:48.439 "nvme_admin": false, 00:08:48.439 "nvme_io": false, 00:08:48.439 "nvme_io_md": false, 00:08:48.439 "write_zeroes": true, 00:08:48.439 "zcopy": true, 00:08:48.439 "get_zone_info": false, 00:08:48.439 "zone_management": false, 00:08:48.439 "zone_append": false, 00:08:48.439 "compare": false, 00:08:48.439 "compare_and_write": false, 00:08:48.439 "abort": true, 00:08:48.439 "seek_hole": false, 00:08:48.439 "seek_data": false, 00:08:48.439 "copy": true, 00:08:48.439 "nvme_iov_md": false 00:08:48.439 }, 00:08:48.439 "memory_domains": [ 00:08:48.439 { 00:08:48.439 "dma_device_id": "system", 00:08:48.439 "dma_device_type": 1 00:08:48.439 }, 00:08:48.439 { 00:08:48.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.439 "dma_device_type": 2 00:08:48.439 } 00:08:48.439 ], 00:08:48.439 "driver_specific": {} 00:08:48.439 } 00:08:48.439 ] 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.439 "name": "Existed_Raid", 00:08:48.439 "uuid": "9d539c37-0714-4bbc-ada0-4cf31a537e4c", 00:08:48.439 "strip_size_kb": 64, 00:08:48.439 "state": "online", 00:08:48.439 "raid_level": "raid0", 00:08:48.439 "superblock": false, 00:08:48.439 "num_base_bdevs": 3, 00:08:48.439 "num_base_bdevs_discovered": 3, 00:08:48.439 "num_base_bdevs_operational": 3, 00:08:48.439 "base_bdevs_list": [ 00:08:48.439 { 00:08:48.439 "name": "NewBaseBdev", 00:08:48.439 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:48.439 "is_configured": true, 00:08:48.439 "data_offset": 0, 00:08:48.439 "data_size": 65536 00:08:48.439 }, 00:08:48.439 { 00:08:48.439 "name": "BaseBdev2", 00:08:48.439 "uuid": 
"176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:48.439 "is_configured": true, 00:08:48.439 "data_offset": 0, 00:08:48.439 "data_size": 65536 00:08:48.439 }, 00:08:48.439 { 00:08:48.439 "name": "BaseBdev3", 00:08:48.439 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:48.439 "is_configured": true, 00:08:48.439 "data_offset": 0, 00:08:48.439 "data_size": 65536 00:08:48.439 } 00:08:48.439 ] 00:08:48.439 }' 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.439 09:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.008 [2024-11-20 09:21:14.298678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.008 09:21:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.008 "name": "Existed_Raid", 00:08:49.008 "aliases": [ 00:08:49.008 "9d539c37-0714-4bbc-ada0-4cf31a537e4c" 00:08:49.008 ], 00:08:49.008 "product_name": "Raid Volume", 00:08:49.008 "block_size": 512, 00:08:49.008 "num_blocks": 196608, 00:08:49.008 "uuid": "9d539c37-0714-4bbc-ada0-4cf31a537e4c", 00:08:49.008 "assigned_rate_limits": { 00:08:49.008 "rw_ios_per_sec": 0, 00:08:49.008 "rw_mbytes_per_sec": 0, 00:08:49.008 "r_mbytes_per_sec": 0, 00:08:49.008 "w_mbytes_per_sec": 0 00:08:49.008 }, 00:08:49.008 "claimed": false, 00:08:49.008 "zoned": false, 00:08:49.008 "supported_io_types": { 00:08:49.008 "read": true, 00:08:49.008 "write": true, 00:08:49.008 "unmap": true, 00:08:49.008 "flush": true, 00:08:49.008 "reset": true, 00:08:49.008 "nvme_admin": false, 00:08:49.008 "nvme_io": false, 00:08:49.008 "nvme_io_md": false, 00:08:49.008 "write_zeroes": true, 00:08:49.008 "zcopy": false, 00:08:49.008 "get_zone_info": false, 00:08:49.008 "zone_management": false, 00:08:49.008 "zone_append": false, 00:08:49.008 "compare": false, 00:08:49.008 "compare_and_write": false, 00:08:49.008 "abort": false, 00:08:49.008 "seek_hole": false, 00:08:49.008 "seek_data": false, 00:08:49.008 "copy": false, 00:08:49.008 "nvme_iov_md": false 00:08:49.008 }, 00:08:49.008 "memory_domains": [ 00:08:49.008 { 00:08:49.008 "dma_device_id": "system", 00:08:49.008 "dma_device_type": 1 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.008 "dma_device_type": 2 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "dma_device_id": "system", 00:08:49.008 "dma_device_type": 1 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.008 "dma_device_type": 2 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "dma_device_id": "system", 00:08:49.008 "dma_device_type": 1 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:49.008 "dma_device_type": 2 00:08:49.008 } 00:08:49.008 ], 00:08:49.008 "driver_specific": { 00:08:49.008 "raid": { 00:08:49.008 "uuid": "9d539c37-0714-4bbc-ada0-4cf31a537e4c", 00:08:49.008 "strip_size_kb": 64, 00:08:49.008 "state": "online", 00:08:49.008 "raid_level": "raid0", 00:08:49.008 "superblock": false, 00:08:49.008 "num_base_bdevs": 3, 00:08:49.008 "num_base_bdevs_discovered": 3, 00:08:49.008 "num_base_bdevs_operational": 3, 00:08:49.008 "base_bdevs_list": [ 00:08:49.008 { 00:08:49.008 "name": "NewBaseBdev", 00:08:49.008 "uuid": "0ee15254-eee8-4157-afdf-4f4b211ce64e", 00:08:49.008 "is_configured": true, 00:08:49.008 "data_offset": 0, 00:08:49.008 "data_size": 65536 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "name": "BaseBdev2", 00:08:49.008 "uuid": "176d615d-aebf-4da8-ab19-e2e3bf35febe", 00:08:49.008 "is_configured": true, 00:08:49.008 "data_offset": 0, 00:08:49.008 "data_size": 65536 00:08:49.008 }, 00:08:49.008 { 00:08:49.008 "name": "BaseBdev3", 00:08:49.008 "uuid": "6784d42d-5916-4244-9d56-5919b9d8ed90", 00:08:49.008 "is_configured": true, 00:08:49.008 "data_offset": 0, 00:08:49.008 "data_size": 65536 00:08:49.008 } 00:08:49.008 ] 00:08:49.008 } 00:08:49.008 } 00:08:49.008 }' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:49.008 BaseBdev2 00:08:49.008 BaseBdev3' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.008 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.268 09:21:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.268 [2024-11-20 09:21:14.605805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.268 [2024-11-20 09:21:14.605857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.268 [2024-11-20 09:21:14.605987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.268 [2024-11-20 09:21:14.606063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.268 [2024-11-20 09:21:14.606080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64046 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 64046 ']' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64046 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64046 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64046' 00:08:49.268 killing process with pid 64046 00:08:49.268 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64046 00:08:49.268 [2024-11-20 09:21:14.654008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.269 09:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64046 00:08:49.838 [2024-11-20 09:21:15.052152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.215 ************************************ 00:08:51.215 END TEST raid_state_function_test 00:08:51.215 ************************************ 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:51.215 00:08:51.215 real 0m11.840s 00:08:51.215 user 0m18.337s 00:08:51.215 sys 0m2.276s 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.215 09:21:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 
00:08:51.215 09:21:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.215 09:21:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.215 09:21:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.215 ************************************ 00:08:51.215 START TEST raid_state_function_test_sb 00:08:51.215 ************************************ 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.215 09:21:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64683 00:08:51.215 Process raid pid: 64683 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64683' 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 64683 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64683 ']' 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.215 09:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.215 [2024-11-20 09:21:16.619805] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:08:51.215 [2024-11-20 09:21:16.619982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.474 [2024-11-20 09:21:16.805287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.733 [2024-11-20 09:21:16.969998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.992 [2024-11-20 09:21:17.250538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.992 [2024-11-20 09:21:17.250606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.251 [2024-11-20 09:21:17.523995] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.251 [2024-11-20 09:21:17.524081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.251 [2024-11-20 09:21:17.524095] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.251 [2024-11-20 09:21:17.524107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.251 [2024-11-20 09:21:17.524115] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:52.251 [2024-11-20 09:21:17.524127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.251 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.251 "name": "Existed_Raid", 00:08:52.251 "uuid": "92bc7b21-0aac-41a2-bb83-e386d7d274f3", 00:08:52.251 "strip_size_kb": 64, 00:08:52.251 "state": "configuring", 00:08:52.251 "raid_level": "raid0", 00:08:52.251 "superblock": true, 00:08:52.251 "num_base_bdevs": 3, 00:08:52.251 "num_base_bdevs_discovered": 0, 00:08:52.251 "num_base_bdevs_operational": 3, 00:08:52.251 "base_bdevs_list": [ 00:08:52.251 { 00:08:52.251 "name": "BaseBdev1", 00:08:52.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.251 "is_configured": false, 00:08:52.251 "data_offset": 0, 00:08:52.251 "data_size": 0 00:08:52.251 }, 00:08:52.251 { 00:08:52.251 "name": "BaseBdev2", 00:08:52.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.251 "is_configured": false, 00:08:52.251 "data_offset": 0, 00:08:52.251 "data_size": 0 00:08:52.251 }, 00:08:52.251 { 00:08:52.251 "name": "BaseBdev3", 00:08:52.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.252 "is_configured": false, 00:08:52.252 "data_offset": 0, 00:08:52.252 "data_size": 0 00:08:52.252 } 00:08:52.252 ] 00:08:52.252 }' 00:08:52.252 09:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.252 09:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.819 [2024-11-20 09:21:18.042696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.819 [2024-11-20 09:21:18.042856] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.819 [2024-11-20 09:21:18.054709] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.819 [2024-11-20 09:21:18.054866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.819 [2024-11-20 09:21:18.054907] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.819 [2024-11-20 09:21:18.054945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.819 [2024-11-20 09:21:18.054976] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.819 [2024-11-20 09:21:18.055012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.819 [2024-11-20 09:21:18.118064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.819 BaseBdev1 
00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.819 [ 00:08:52.819 { 00:08:52.819 "name": "BaseBdev1", 00:08:52.819 "aliases": [ 00:08:52.819 "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66" 00:08:52.819 ], 00:08:52.819 "product_name": "Malloc disk", 00:08:52.819 "block_size": 512, 00:08:52.819 "num_blocks": 65536, 00:08:52.819 "uuid": "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66", 00:08:52.819 "assigned_rate_limits": { 00:08:52.819 
"rw_ios_per_sec": 0, 00:08:52.819 "rw_mbytes_per_sec": 0, 00:08:52.819 "r_mbytes_per_sec": 0, 00:08:52.819 "w_mbytes_per_sec": 0 00:08:52.819 }, 00:08:52.819 "claimed": true, 00:08:52.819 "claim_type": "exclusive_write", 00:08:52.819 "zoned": false, 00:08:52.819 "supported_io_types": { 00:08:52.819 "read": true, 00:08:52.819 "write": true, 00:08:52.819 "unmap": true, 00:08:52.819 "flush": true, 00:08:52.819 "reset": true, 00:08:52.819 "nvme_admin": false, 00:08:52.819 "nvme_io": false, 00:08:52.819 "nvme_io_md": false, 00:08:52.819 "write_zeroes": true, 00:08:52.819 "zcopy": true, 00:08:52.819 "get_zone_info": false, 00:08:52.819 "zone_management": false, 00:08:52.819 "zone_append": false, 00:08:52.819 "compare": false, 00:08:52.819 "compare_and_write": false, 00:08:52.819 "abort": true, 00:08:52.819 "seek_hole": false, 00:08:52.819 "seek_data": false, 00:08:52.819 "copy": true, 00:08:52.819 "nvme_iov_md": false 00:08:52.819 }, 00:08:52.819 "memory_domains": [ 00:08:52.819 { 00:08:52.819 "dma_device_id": "system", 00:08:52.819 "dma_device_type": 1 00:08:52.819 }, 00:08:52.819 { 00:08:52.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.819 "dma_device_type": 2 00:08:52.819 } 00:08:52.819 ], 00:08:52.819 "driver_specific": {} 00:08:52.819 } 00:08:52.819 ] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.819 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.820 "name": "Existed_Raid", 00:08:52.820 "uuid": "b11864ad-a1b4-4a5d-8d74-47410e2b84b5", 00:08:52.820 "strip_size_kb": 64, 00:08:52.820 "state": "configuring", 00:08:52.820 "raid_level": "raid0", 00:08:52.820 "superblock": true, 00:08:52.820 "num_base_bdevs": 3, 00:08:52.820 "num_base_bdevs_discovered": 1, 00:08:52.820 "num_base_bdevs_operational": 3, 00:08:52.820 "base_bdevs_list": [ 00:08:52.820 { 00:08:52.820 "name": "BaseBdev1", 00:08:52.820 "uuid": "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66", 00:08:52.820 "is_configured": true, 00:08:52.820 "data_offset": 2048, 00:08:52.820 "data_size": 63488 
00:08:52.820 }, 00:08:52.820 { 00:08:52.820 "name": "BaseBdev2", 00:08:52.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.820 "is_configured": false, 00:08:52.820 "data_offset": 0, 00:08:52.820 "data_size": 0 00:08:52.820 }, 00:08:52.820 { 00:08:52.820 "name": "BaseBdev3", 00:08:52.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.820 "is_configured": false, 00:08:52.820 "data_offset": 0, 00:08:52.820 "data_size": 0 00:08:52.820 } 00:08:52.820 ] 00:08:52.820 }' 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.820 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.395 [2024-11-20 09:21:18.645313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.395 [2024-11-20 09:21:18.645411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.395 [2024-11-20 09:21:18.657377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.395 [2024-11-20 
09:21:18.660051] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.395 [2024-11-20 09:21:18.660115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.395 [2024-11-20 09:21:18.660129] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.395 [2024-11-20 09:21:18.660140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.395 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.396 "name": "Existed_Raid", 00:08:53.396 "uuid": "dec95e4c-2cb7-416f-8fb2-4e94764b876f", 00:08:53.396 "strip_size_kb": 64, 00:08:53.396 "state": "configuring", 00:08:53.396 "raid_level": "raid0", 00:08:53.396 "superblock": true, 00:08:53.396 "num_base_bdevs": 3, 00:08:53.396 "num_base_bdevs_discovered": 1, 00:08:53.396 "num_base_bdevs_operational": 3, 00:08:53.396 "base_bdevs_list": [ 00:08:53.396 { 00:08:53.396 "name": "BaseBdev1", 00:08:53.396 "uuid": "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66", 00:08:53.396 "is_configured": true, 00:08:53.396 "data_offset": 2048, 00:08:53.396 "data_size": 63488 00:08:53.396 }, 00:08:53.396 { 00:08:53.396 "name": "BaseBdev2", 00:08:53.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.396 "is_configured": false, 00:08:53.396 "data_offset": 0, 00:08:53.396 "data_size": 0 00:08:53.396 }, 00:08:53.396 { 00:08:53.396 "name": "BaseBdev3", 00:08:53.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.396 "is_configured": false, 00:08:53.396 "data_offset": 0, 00:08:53.396 "data_size": 0 00:08:53.396 } 00:08:53.396 ] 00:08:53.396 }' 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.396 09:21:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.983 [2024-11-20 09:21:19.199989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.983 BaseBdev2 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.983 [ 00:08:53.983 { 00:08:53.983 "name": "BaseBdev2", 00:08:53.983 "aliases": [ 00:08:53.983 "0cd5257f-baad-4f31-b795-e046012acab8" 00:08:53.983 ], 00:08:53.983 "product_name": "Malloc disk", 00:08:53.983 "block_size": 512, 00:08:53.983 "num_blocks": 65536, 00:08:53.983 "uuid": "0cd5257f-baad-4f31-b795-e046012acab8", 00:08:53.983 "assigned_rate_limits": { 00:08:53.983 "rw_ios_per_sec": 0, 00:08:53.983 "rw_mbytes_per_sec": 0, 00:08:53.983 "r_mbytes_per_sec": 0, 00:08:53.983 "w_mbytes_per_sec": 0 00:08:53.983 }, 00:08:53.983 "claimed": true, 00:08:53.983 "claim_type": "exclusive_write", 00:08:53.983 "zoned": false, 00:08:53.983 "supported_io_types": { 00:08:53.983 "read": true, 00:08:53.983 "write": true, 00:08:53.983 "unmap": true, 00:08:53.983 "flush": true, 00:08:53.983 "reset": true, 00:08:53.983 "nvme_admin": false, 00:08:53.983 "nvme_io": false, 00:08:53.983 "nvme_io_md": false, 00:08:53.983 "write_zeroes": true, 00:08:53.983 "zcopy": true, 00:08:53.983 "get_zone_info": false, 00:08:53.983 "zone_management": false, 00:08:53.983 "zone_append": false, 00:08:53.983 "compare": false, 00:08:53.983 "compare_and_write": false, 00:08:53.983 "abort": true, 00:08:53.983 "seek_hole": false, 00:08:53.983 "seek_data": false, 00:08:53.983 "copy": true, 00:08:53.983 "nvme_iov_md": false 00:08:53.983 }, 00:08:53.983 "memory_domains": [ 00:08:53.983 { 00:08:53.983 "dma_device_id": "system", 00:08:53.983 "dma_device_type": 1 00:08:53.983 }, 00:08:53.983 { 00:08:53.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.983 "dma_device_type": 2 00:08:53.983 } 00:08:53.983 ], 00:08:53.983 "driver_specific": {} 00:08:53.983 } 00:08:53.983 ] 00:08:53.983 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.984 "name": "Existed_Raid", 00:08:53.984 "uuid": "dec95e4c-2cb7-416f-8fb2-4e94764b876f", 00:08:53.984 "strip_size_kb": 64, 00:08:53.984 "state": "configuring", 00:08:53.984 "raid_level": "raid0", 00:08:53.984 "superblock": true, 00:08:53.984 "num_base_bdevs": 3, 00:08:53.984 "num_base_bdevs_discovered": 2, 00:08:53.984 "num_base_bdevs_operational": 3, 00:08:53.984 "base_bdevs_list": [ 00:08:53.984 { 00:08:53.984 "name": "BaseBdev1", 00:08:53.984 "uuid": "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66", 00:08:53.984 "is_configured": true, 00:08:53.984 "data_offset": 2048, 00:08:53.984 "data_size": 63488 00:08:53.984 }, 00:08:53.984 { 00:08:53.984 "name": "BaseBdev2", 00:08:53.984 "uuid": "0cd5257f-baad-4f31-b795-e046012acab8", 00:08:53.984 "is_configured": true, 00:08:53.984 "data_offset": 2048, 00:08:53.984 "data_size": 63488 00:08:53.984 }, 00:08:53.984 { 00:08:53.984 "name": "BaseBdev3", 00:08:53.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.984 "is_configured": false, 00:08:53.984 "data_offset": 0, 00:08:53.984 "data_size": 0 00:08:53.984 } 00:08:53.984 ] 00:08:53.984 }' 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.984 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.552 [2024-11-20 09:21:19.803717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.552 [2024-11-20 09:21:19.804209] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.552 [2024-11-20 09:21:19.804287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.552 [2024-11-20 09:21:19.804697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:54.552 [2024-11-20 09:21:19.804937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.552 [2024-11-20 09:21:19.804984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:54.552 BaseBdev3 00:08:54.552 [2024-11-20 09:21:19.805239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:54.552 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.553 [ 00:08:54.553 { 00:08:54.553 "name": "BaseBdev3", 00:08:54.553 "aliases": [ 00:08:54.553 "47f9264f-e0fb-4790-a76d-eda0d63bfa31" 00:08:54.553 ], 00:08:54.553 "product_name": "Malloc disk", 00:08:54.553 "block_size": 512, 00:08:54.553 "num_blocks": 65536, 00:08:54.553 "uuid": "47f9264f-e0fb-4790-a76d-eda0d63bfa31", 00:08:54.553 "assigned_rate_limits": { 00:08:54.553 "rw_ios_per_sec": 0, 00:08:54.553 "rw_mbytes_per_sec": 0, 00:08:54.553 "r_mbytes_per_sec": 0, 00:08:54.553 "w_mbytes_per_sec": 0 00:08:54.553 }, 00:08:54.553 "claimed": true, 00:08:54.553 "claim_type": "exclusive_write", 00:08:54.553 "zoned": false, 00:08:54.553 "supported_io_types": { 00:08:54.553 "read": true, 00:08:54.553 "write": true, 00:08:54.553 "unmap": true, 00:08:54.553 "flush": true, 00:08:54.553 "reset": true, 00:08:54.553 "nvme_admin": false, 00:08:54.553 "nvme_io": false, 00:08:54.553 "nvme_io_md": false, 00:08:54.553 "write_zeroes": true, 00:08:54.553 "zcopy": true, 00:08:54.553 "get_zone_info": false, 00:08:54.553 "zone_management": false, 00:08:54.553 "zone_append": false, 00:08:54.553 "compare": false, 00:08:54.553 "compare_and_write": false, 00:08:54.553 "abort": true, 00:08:54.553 "seek_hole": false, 00:08:54.553 "seek_data": false, 00:08:54.553 "copy": true, 00:08:54.553 "nvme_iov_md": false 00:08:54.553 }, 00:08:54.553 "memory_domains": [ 00:08:54.553 { 00:08:54.553 "dma_device_id": "system", 00:08:54.553 "dma_device_type": 1 00:08:54.553 }, 00:08:54.553 { 00:08:54.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.553 "dma_device_type": 2 00:08:54.553 } 00:08:54.553 ], 00:08:54.553 "driver_specific": 
{} 00:08:54.553 } 00:08:54.553 ] 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.553 "name": "Existed_Raid", 00:08:54.553 "uuid": "dec95e4c-2cb7-416f-8fb2-4e94764b876f", 00:08:54.553 "strip_size_kb": 64, 00:08:54.553 "state": "online", 00:08:54.553 "raid_level": "raid0", 00:08:54.553 "superblock": true, 00:08:54.553 "num_base_bdevs": 3, 00:08:54.553 "num_base_bdevs_discovered": 3, 00:08:54.553 "num_base_bdevs_operational": 3, 00:08:54.553 "base_bdevs_list": [ 00:08:54.553 { 00:08:54.553 "name": "BaseBdev1", 00:08:54.553 "uuid": "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66", 00:08:54.553 "is_configured": true, 00:08:54.553 "data_offset": 2048, 00:08:54.553 "data_size": 63488 00:08:54.553 }, 00:08:54.553 { 00:08:54.553 "name": "BaseBdev2", 00:08:54.553 "uuid": "0cd5257f-baad-4f31-b795-e046012acab8", 00:08:54.553 "is_configured": true, 00:08:54.553 "data_offset": 2048, 00:08:54.553 "data_size": 63488 00:08:54.553 }, 00:08:54.553 { 00:08:54.553 "name": "BaseBdev3", 00:08:54.553 "uuid": "47f9264f-e0fb-4790-a76d-eda0d63bfa31", 00:08:54.553 "is_configured": true, 00:08:54.553 "data_offset": 2048, 00:08:54.553 "data_size": 63488 00:08:54.553 } 00:08:54.553 ] 00:08:54.553 }' 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.553 09:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.122 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.122 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.122 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.123 [2024-11-20 09:21:20.355381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.123 "name": "Existed_Raid", 00:08:55.123 "aliases": [ 00:08:55.123 "dec95e4c-2cb7-416f-8fb2-4e94764b876f" 00:08:55.123 ], 00:08:55.123 "product_name": "Raid Volume", 00:08:55.123 "block_size": 512, 00:08:55.123 "num_blocks": 190464, 00:08:55.123 "uuid": "dec95e4c-2cb7-416f-8fb2-4e94764b876f", 00:08:55.123 "assigned_rate_limits": { 00:08:55.123 "rw_ios_per_sec": 0, 00:08:55.123 "rw_mbytes_per_sec": 0, 00:08:55.123 "r_mbytes_per_sec": 0, 00:08:55.123 "w_mbytes_per_sec": 0 00:08:55.123 }, 00:08:55.123 "claimed": false, 00:08:55.123 "zoned": false, 00:08:55.123 "supported_io_types": { 00:08:55.123 "read": true, 00:08:55.123 "write": true, 00:08:55.123 "unmap": true, 00:08:55.123 "flush": true, 00:08:55.123 "reset": true, 00:08:55.123 "nvme_admin": false, 00:08:55.123 "nvme_io": false, 00:08:55.123 "nvme_io_md": false, 00:08:55.123 
"write_zeroes": true, 00:08:55.123 "zcopy": false, 00:08:55.123 "get_zone_info": false, 00:08:55.123 "zone_management": false, 00:08:55.123 "zone_append": false, 00:08:55.123 "compare": false, 00:08:55.123 "compare_and_write": false, 00:08:55.123 "abort": false, 00:08:55.123 "seek_hole": false, 00:08:55.123 "seek_data": false, 00:08:55.123 "copy": false, 00:08:55.123 "nvme_iov_md": false 00:08:55.123 }, 00:08:55.123 "memory_domains": [ 00:08:55.123 { 00:08:55.123 "dma_device_id": "system", 00:08:55.123 "dma_device_type": 1 00:08:55.123 }, 00:08:55.123 { 00:08:55.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.123 "dma_device_type": 2 00:08:55.123 }, 00:08:55.123 { 00:08:55.123 "dma_device_id": "system", 00:08:55.123 "dma_device_type": 1 00:08:55.123 }, 00:08:55.123 { 00:08:55.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.123 "dma_device_type": 2 00:08:55.123 }, 00:08:55.123 { 00:08:55.123 "dma_device_id": "system", 00:08:55.123 "dma_device_type": 1 00:08:55.123 }, 00:08:55.123 { 00:08:55.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.123 "dma_device_type": 2 00:08:55.123 } 00:08:55.123 ], 00:08:55.123 "driver_specific": { 00:08:55.123 "raid": { 00:08:55.123 "uuid": "dec95e4c-2cb7-416f-8fb2-4e94764b876f", 00:08:55.123 "strip_size_kb": 64, 00:08:55.123 "state": "online", 00:08:55.123 "raid_level": "raid0", 00:08:55.123 "superblock": true, 00:08:55.123 "num_base_bdevs": 3, 00:08:55.123 "num_base_bdevs_discovered": 3, 00:08:55.123 "num_base_bdevs_operational": 3, 00:08:55.123 "base_bdevs_list": [ 00:08:55.123 { 00:08:55.123 "name": "BaseBdev1", 00:08:55.123 "uuid": "8b96a7fc-0e95-4c3e-ac56-1160b7d3ac66", 00:08:55.123 "is_configured": true, 00:08:55.123 "data_offset": 2048, 00:08:55.123 "data_size": 63488 00:08:55.123 }, 00:08:55.123 { 00:08:55.123 "name": "BaseBdev2", 00:08:55.123 "uuid": "0cd5257f-baad-4f31-b795-e046012acab8", 00:08:55.123 "is_configured": true, 00:08:55.123 "data_offset": 2048, 00:08:55.123 "data_size": 63488 00:08:55.123 }, 
00:08:55.123 { 00:08:55.123 "name": "BaseBdev3", 00:08:55.123 "uuid": "47f9264f-e0fb-4790-a76d-eda0d63bfa31", 00:08:55.123 "is_configured": true, 00:08:55.123 "data_offset": 2048, 00:08:55.123 "data_size": 63488 00:08:55.123 } 00:08:55.123 ] 00:08:55.123 } 00:08:55.123 } 00:08:55.123 }' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.123 BaseBdev2 00:08:55.123 BaseBdev3' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.123 
09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.123 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.382 [2024-11-20 09:21:20.610747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.382 [2024-11-20 09:21:20.610800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.382 [2024-11-20 09:21:20.610871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.382 "name": "Existed_Raid", 00:08:55.382 "uuid": "dec95e4c-2cb7-416f-8fb2-4e94764b876f", 00:08:55.382 "strip_size_kb": 64, 00:08:55.382 "state": "offline", 00:08:55.382 "raid_level": "raid0", 00:08:55.382 "superblock": true, 00:08:55.382 "num_base_bdevs": 3, 00:08:55.382 "num_base_bdevs_discovered": 2, 00:08:55.382 "num_base_bdevs_operational": 2, 00:08:55.382 "base_bdevs_list": [ 00:08:55.382 { 00:08:55.382 "name": null, 00:08:55.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.382 "is_configured": false, 00:08:55.382 "data_offset": 0, 00:08:55.382 "data_size": 63488 00:08:55.382 }, 00:08:55.382 { 00:08:55.382 "name": "BaseBdev2", 00:08:55.382 "uuid": "0cd5257f-baad-4f31-b795-e046012acab8", 00:08:55.382 "is_configured": true, 00:08:55.382 "data_offset": 2048, 00:08:55.382 "data_size": 63488 00:08:55.382 }, 00:08:55.382 { 00:08:55.382 "name": "BaseBdev3", 00:08:55.382 "uuid": "47f9264f-e0fb-4790-a76d-eda0d63bfa31", 
00:08:55.382 "is_configured": true, 00:08:55.382 "data_offset": 2048, 00:08:55.382 "data_size": 63488 00:08:55.382 } 00:08:55.382 ] 00:08:55.382 }' 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.382 09:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.948 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.948 [2024-11-20 09:21:21.296571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.206 [2024-11-20 09:21:21.483585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.206 [2024-11-20 09:21:21.483771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.206 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 BaseBdev2 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.465 09:21:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.465 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 [ 00:08:56.466 { 00:08:56.466 "name": "BaseBdev2", 00:08:56.466 "aliases": [ 00:08:56.466 "42dd774e-6c04-4d73-8839-c1528883d2f4" 00:08:56.466 ], 00:08:56.466 "product_name": "Malloc disk", 00:08:56.466 "block_size": 512, 00:08:56.466 "num_blocks": 65536, 00:08:56.466 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:56.466 "assigned_rate_limits": { 00:08:56.466 "rw_ios_per_sec": 0, 00:08:56.466 "rw_mbytes_per_sec": 0, 00:08:56.466 "r_mbytes_per_sec": 0, 00:08:56.466 "w_mbytes_per_sec": 0 00:08:56.466 }, 00:08:56.466 "claimed": false, 00:08:56.466 "zoned": false, 00:08:56.466 "supported_io_types": { 00:08:56.466 "read": true, 00:08:56.466 "write": true, 00:08:56.466 "unmap": true, 00:08:56.466 "flush": true, 00:08:56.466 "reset": true, 00:08:56.466 "nvme_admin": false, 00:08:56.466 "nvme_io": false, 00:08:56.466 "nvme_io_md": false, 00:08:56.466 "write_zeroes": true, 00:08:56.466 "zcopy": true, 00:08:56.466 "get_zone_info": false, 00:08:56.466 
"zone_management": false, 00:08:56.466 "zone_append": false, 00:08:56.466 "compare": false, 00:08:56.466 "compare_and_write": false, 00:08:56.466 "abort": true, 00:08:56.466 "seek_hole": false, 00:08:56.466 "seek_data": false, 00:08:56.466 "copy": true, 00:08:56.466 "nvme_iov_md": false 00:08:56.466 }, 00:08:56.466 "memory_domains": [ 00:08:56.466 { 00:08:56.466 "dma_device_id": "system", 00:08:56.466 "dma_device_type": 1 00:08:56.466 }, 00:08:56.466 { 00:08:56.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.466 "dma_device_type": 2 00:08:56.466 } 00:08:56.466 ], 00:08:56.466 "driver_specific": {} 00:08:56.466 } 00:08:56.466 ] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 BaseBdev3 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 [ 00:08:56.466 { 00:08:56.466 "name": "BaseBdev3", 00:08:56.466 "aliases": [ 00:08:56.466 "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2" 00:08:56.466 ], 00:08:56.466 "product_name": "Malloc disk", 00:08:56.466 "block_size": 512, 00:08:56.466 "num_blocks": 65536, 00:08:56.466 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:56.466 "assigned_rate_limits": { 00:08:56.466 "rw_ios_per_sec": 0, 00:08:56.466 "rw_mbytes_per_sec": 0, 00:08:56.466 "r_mbytes_per_sec": 0, 00:08:56.466 "w_mbytes_per_sec": 0 00:08:56.466 }, 00:08:56.466 "claimed": false, 00:08:56.466 "zoned": false, 00:08:56.466 "supported_io_types": { 00:08:56.466 "read": true, 00:08:56.466 "write": true, 00:08:56.466 "unmap": true, 00:08:56.466 "flush": true, 00:08:56.466 "reset": true, 00:08:56.466 "nvme_admin": false, 00:08:56.466 "nvme_io": false, 00:08:56.466 "nvme_io_md": false, 00:08:56.466 "write_zeroes": true, 00:08:56.466 
"zcopy": true, 00:08:56.466 "get_zone_info": false, 00:08:56.466 "zone_management": false, 00:08:56.466 "zone_append": false, 00:08:56.466 "compare": false, 00:08:56.466 "compare_and_write": false, 00:08:56.466 "abort": true, 00:08:56.466 "seek_hole": false, 00:08:56.466 "seek_data": false, 00:08:56.466 "copy": true, 00:08:56.466 "nvme_iov_md": false 00:08:56.466 }, 00:08:56.466 "memory_domains": [ 00:08:56.466 { 00:08:56.466 "dma_device_id": "system", 00:08:56.466 "dma_device_type": 1 00:08:56.466 }, 00:08:56.466 { 00:08:56.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.466 "dma_device_type": 2 00:08:56.466 } 00:08:56.466 ], 00:08:56.466 "driver_specific": {} 00:08:56.466 } 00:08:56.466 ] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 [2024-11-20 09:21:21.864845] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.466 [2024-11-20 09:21:21.865021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.466 [2024-11-20 09:21:21.865094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.466 [2024-11-20 09:21:21.867814] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.466 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.725 09:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.725 "name": "Existed_Raid", 00:08:56.725 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:56.725 "strip_size_kb": 64, 00:08:56.725 "state": "configuring", 00:08:56.725 "raid_level": "raid0", 00:08:56.725 "superblock": true, 00:08:56.725 "num_base_bdevs": 3, 00:08:56.725 "num_base_bdevs_discovered": 2, 00:08:56.725 "num_base_bdevs_operational": 3, 00:08:56.725 "base_bdevs_list": [ 00:08:56.725 { 00:08:56.725 "name": "BaseBdev1", 00:08:56.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.725 "is_configured": false, 00:08:56.725 "data_offset": 0, 00:08:56.725 "data_size": 0 00:08:56.725 }, 00:08:56.725 { 00:08:56.725 "name": "BaseBdev2", 00:08:56.725 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:56.725 "is_configured": true, 00:08:56.725 "data_offset": 2048, 00:08:56.725 "data_size": 63488 00:08:56.725 }, 00:08:56.725 { 00:08:56.725 "name": "BaseBdev3", 00:08:56.725 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:56.725 "is_configured": true, 00:08:56.725 "data_offset": 2048, 00:08:56.725 "data_size": 63488 00:08:56.725 } 00:08:56.725 ] 00:08:56.725 }' 00:08:56.725 09:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.725 09:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.984 [2024-11-20 09:21:22.395943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.984 09:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.984 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.985 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.985 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.289 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.289 "name": "Existed_Raid", 00:08:57.289 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:57.289 "strip_size_kb": 64, 
00:08:57.289 "state": "configuring", 00:08:57.289 "raid_level": "raid0", 00:08:57.289 "superblock": true, 00:08:57.289 "num_base_bdevs": 3, 00:08:57.289 "num_base_bdevs_discovered": 1, 00:08:57.289 "num_base_bdevs_operational": 3, 00:08:57.289 "base_bdevs_list": [ 00:08:57.289 { 00:08:57.289 "name": "BaseBdev1", 00:08:57.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.289 "is_configured": false, 00:08:57.289 "data_offset": 0, 00:08:57.289 "data_size": 0 00:08:57.289 }, 00:08:57.289 { 00:08:57.289 "name": null, 00:08:57.289 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:57.289 "is_configured": false, 00:08:57.289 "data_offset": 0, 00:08:57.289 "data_size": 63488 00:08:57.289 }, 00:08:57.289 { 00:08:57.289 "name": "BaseBdev3", 00:08:57.289 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:57.289 "is_configured": true, 00:08:57.289 "data_offset": 2048, 00:08:57.289 "data_size": 63488 00:08:57.289 } 00:08:57.289 ] 00:08:57.289 }' 00:08:57.289 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.289 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 [2024-11-20 09:21:22.969519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.551 BaseBdev1 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.551 09:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 
[ 00:08:57.551 { 00:08:57.551 "name": "BaseBdev1", 00:08:57.551 "aliases": [ 00:08:57.551 "93e598f2-4011-4a64-b59f-9b90a5fe0b10" 00:08:57.551 ], 00:08:57.551 "product_name": "Malloc disk", 00:08:57.551 "block_size": 512, 00:08:57.551 "num_blocks": 65536, 00:08:57.551 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:08:57.551 "assigned_rate_limits": { 00:08:57.551 "rw_ios_per_sec": 0, 00:08:57.551 "rw_mbytes_per_sec": 0, 00:08:57.551 "r_mbytes_per_sec": 0, 00:08:57.551 "w_mbytes_per_sec": 0 00:08:57.551 }, 00:08:57.551 "claimed": true, 00:08:57.551 "claim_type": "exclusive_write", 00:08:57.551 "zoned": false, 00:08:57.551 "supported_io_types": { 00:08:57.551 "read": true, 00:08:57.551 "write": true, 00:08:57.551 "unmap": true, 00:08:57.551 "flush": true, 00:08:57.551 "reset": true, 00:08:57.551 "nvme_admin": false, 00:08:57.551 "nvme_io": false, 00:08:57.551 "nvme_io_md": false, 00:08:57.551 "write_zeroes": true, 00:08:57.551 "zcopy": true, 00:08:57.551 "get_zone_info": false, 00:08:57.551 "zone_management": false, 00:08:57.551 "zone_append": false, 00:08:57.551 "compare": false, 00:08:57.551 "compare_and_write": false, 00:08:57.551 "abort": true, 00:08:57.811 "seek_hole": false, 00:08:57.811 "seek_data": false, 00:08:57.811 "copy": true, 00:08:57.811 "nvme_iov_md": false 00:08:57.811 }, 00:08:57.811 "memory_domains": [ 00:08:57.811 { 00:08:57.811 "dma_device_id": "system", 00:08:57.811 "dma_device_type": 1 00:08:57.811 }, 00:08:57.811 { 00:08:57.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.811 "dma_device_type": 2 00:08:57.811 } 00:08:57.811 ], 00:08:57.811 "driver_specific": {} 00:08:57.811 } 00:08:57.811 ] 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.811 "name": "Existed_Raid", 00:08:57.811 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:57.811 "strip_size_kb": 64, 00:08:57.811 "state": "configuring", 00:08:57.811 "raid_level": "raid0", 00:08:57.811 "superblock": true, 
00:08:57.811 "num_base_bdevs": 3, 00:08:57.811 "num_base_bdevs_discovered": 2, 00:08:57.811 "num_base_bdevs_operational": 3, 00:08:57.811 "base_bdevs_list": [ 00:08:57.811 { 00:08:57.811 "name": "BaseBdev1", 00:08:57.811 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:08:57.811 "is_configured": true, 00:08:57.811 "data_offset": 2048, 00:08:57.811 "data_size": 63488 00:08:57.811 }, 00:08:57.811 { 00:08:57.811 "name": null, 00:08:57.811 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:57.811 "is_configured": false, 00:08:57.811 "data_offset": 0, 00:08:57.811 "data_size": 63488 00:08:57.811 }, 00:08:57.811 { 00:08:57.811 "name": "BaseBdev3", 00:08:57.811 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:57.811 "is_configured": true, 00:08:57.811 "data_offset": 2048, 00:08:57.811 "data_size": 63488 00:08:57.811 } 00:08:57.811 ] 00:08:57.811 }' 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.811 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.069 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.069 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.069 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.069 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.069 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.326 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:58.326 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:58.326 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:58.326 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.326 [2024-11-20 09:21:23.548619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.326 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.326 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.327 "name": "Existed_Raid", 00:08:58.327 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:58.327 "strip_size_kb": 64, 00:08:58.327 "state": "configuring", 00:08:58.327 "raid_level": "raid0", 00:08:58.327 "superblock": true, 00:08:58.327 "num_base_bdevs": 3, 00:08:58.327 "num_base_bdevs_discovered": 1, 00:08:58.327 "num_base_bdevs_operational": 3, 00:08:58.327 "base_bdevs_list": [ 00:08:58.327 { 00:08:58.327 "name": "BaseBdev1", 00:08:58.327 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:08:58.327 "is_configured": true, 00:08:58.327 "data_offset": 2048, 00:08:58.327 "data_size": 63488 00:08:58.327 }, 00:08:58.327 { 00:08:58.327 "name": null, 00:08:58.327 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:58.327 "is_configured": false, 00:08:58.327 "data_offset": 0, 00:08:58.327 "data_size": 63488 00:08:58.327 }, 00:08:58.327 { 00:08:58.327 "name": null, 00:08:58.327 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:58.327 "is_configured": false, 00:08:58.327 "data_offset": 0, 00:08:58.327 "data_size": 63488 00:08:58.327 } 00:08:58.327 ] 00:08:58.327 }' 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.327 09:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.584 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.584 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.584 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.584 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.843 [2024-11-20 09:21:24.079861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.843 "name": "Existed_Raid", 00:08:58.843 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:58.843 "strip_size_kb": 64, 00:08:58.843 "state": "configuring", 00:08:58.843 "raid_level": "raid0", 00:08:58.843 "superblock": true, 00:08:58.843 "num_base_bdevs": 3, 00:08:58.843 "num_base_bdevs_discovered": 2, 00:08:58.843 "num_base_bdevs_operational": 3, 00:08:58.843 "base_bdevs_list": [ 00:08:58.843 { 00:08:58.843 "name": "BaseBdev1", 00:08:58.843 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:08:58.843 "is_configured": true, 00:08:58.843 "data_offset": 2048, 00:08:58.843 "data_size": 63488 00:08:58.843 }, 00:08:58.843 { 00:08:58.843 "name": null, 00:08:58.843 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:58.843 "is_configured": false, 00:08:58.843 "data_offset": 0, 00:08:58.843 "data_size": 63488 00:08:58.843 }, 00:08:58.843 { 00:08:58.843 "name": "BaseBdev3", 00:08:58.843 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:58.843 "is_configured": true, 00:08:58.843 "data_offset": 2048, 00:08:58.843 "data_size": 63488 00:08:58.843 } 00:08:58.843 ] 00:08:58.843 }' 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.843 09:21:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.144 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.144 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.144 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.144 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.144 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.405 [2024-11-20 09:21:24.634946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.405 "name": "Existed_Raid", 00:08:59.405 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:59.405 "strip_size_kb": 64, 00:08:59.405 "state": "configuring", 00:08:59.405 "raid_level": "raid0", 00:08:59.405 "superblock": true, 00:08:59.405 "num_base_bdevs": 3, 00:08:59.405 "num_base_bdevs_discovered": 1, 00:08:59.405 "num_base_bdevs_operational": 3, 00:08:59.405 "base_bdevs_list": [ 00:08:59.405 { 00:08:59.405 "name": null, 00:08:59.405 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:08:59.405 "is_configured": false, 00:08:59.405 "data_offset": 0, 00:08:59.405 "data_size": 63488 00:08:59.405 }, 00:08:59.405 { 00:08:59.405 "name": null, 00:08:59.405 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:59.405 "is_configured": false, 00:08:59.405 "data_offset": 0, 00:08:59.405 
"data_size": 63488 00:08:59.405 }, 00:08:59.405 { 00:08:59.405 "name": "BaseBdev3", 00:08:59.405 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:59.405 "is_configured": true, 00:08:59.405 "data_offset": 2048, 00:08:59.405 "data_size": 63488 00:08:59.405 } 00:08:59.405 ] 00:08:59.405 }' 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.405 09:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.971 [2024-11-20 09:21:25.247335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.971 09:21:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.971 "name": "Existed_Raid", 00:08:59.971 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:08:59.971 "strip_size_kb": 64, 00:08:59.971 "state": "configuring", 00:08:59.971 "raid_level": "raid0", 00:08:59.971 "superblock": true, 00:08:59.971 "num_base_bdevs": 3, 00:08:59.971 
"num_base_bdevs_discovered": 2, 00:08:59.971 "num_base_bdevs_operational": 3, 00:08:59.971 "base_bdevs_list": [ 00:08:59.971 { 00:08:59.971 "name": null, 00:08:59.971 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:08:59.971 "is_configured": false, 00:08:59.971 "data_offset": 0, 00:08:59.971 "data_size": 63488 00:08:59.971 }, 00:08:59.971 { 00:08:59.971 "name": "BaseBdev2", 00:08:59.971 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:08:59.971 "is_configured": true, 00:08:59.971 "data_offset": 2048, 00:08:59.971 "data_size": 63488 00:08:59.971 }, 00:08:59.971 { 00:08:59.971 "name": "BaseBdev3", 00:08:59.971 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:08:59.971 "is_configured": true, 00:08:59.971 "data_offset": 2048, 00:08:59.971 "data_size": 63488 00:08:59.971 } 00:08:59.971 ] 00:08:59.971 }' 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.971 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 09:21:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 93e598f2-4011-4a64-b59f-9b90a5fe0b10 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 [2024-11-20 09:21:25.870980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:00.540 [2024-11-20 09:21:25.871300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.540 [2024-11-20 09:21:25.871321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.540 [2024-11-20 09:21:25.871703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:00.540 NewBaseBdev 00:09:00.540 [2024-11-20 09:21:25.871891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.540 [2024-11-20 09:21:25.871921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:00.540 [2024-11-20 09:21:25.872103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 [ 00:09:00.540 { 00:09:00.540 "name": "NewBaseBdev", 00:09:00.540 "aliases": [ 00:09:00.540 "93e598f2-4011-4a64-b59f-9b90a5fe0b10" 00:09:00.540 ], 00:09:00.540 "product_name": "Malloc disk", 00:09:00.540 "block_size": 512, 00:09:00.540 "num_blocks": 65536, 00:09:00.540 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:09:00.540 "assigned_rate_limits": { 00:09:00.540 "rw_ios_per_sec": 0, 00:09:00.540 "rw_mbytes_per_sec": 0, 00:09:00.540 "r_mbytes_per_sec": 0, 00:09:00.540 "w_mbytes_per_sec": 0 00:09:00.540 }, 00:09:00.540 "claimed": true, 00:09:00.540 "claim_type": "exclusive_write", 00:09:00.540 "zoned": false, 00:09:00.540 "supported_io_types": { 00:09:00.540 "read": true, 00:09:00.540 "write": true, 
00:09:00.540 "unmap": true, 00:09:00.540 "flush": true, 00:09:00.540 "reset": true, 00:09:00.540 "nvme_admin": false, 00:09:00.540 "nvme_io": false, 00:09:00.540 "nvme_io_md": false, 00:09:00.540 "write_zeroes": true, 00:09:00.540 "zcopy": true, 00:09:00.540 "get_zone_info": false, 00:09:00.540 "zone_management": false, 00:09:00.540 "zone_append": false, 00:09:00.540 "compare": false, 00:09:00.540 "compare_and_write": false, 00:09:00.540 "abort": true, 00:09:00.540 "seek_hole": false, 00:09:00.540 "seek_data": false, 00:09:00.540 "copy": true, 00:09:00.540 "nvme_iov_md": false 00:09:00.540 }, 00:09:00.540 "memory_domains": [ 00:09:00.540 { 00:09:00.540 "dma_device_id": "system", 00:09:00.540 "dma_device_type": 1 00:09:00.540 }, 00:09:00.540 { 00:09:00.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.540 "dma_device_type": 2 00:09:00.540 } 00:09:00.540 ], 00:09:00.540 "driver_specific": {} 00:09:00.540 } 00:09:00.540 ] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.540 "name": "Existed_Raid", 00:09:00.540 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:09:00.540 "strip_size_kb": 64, 00:09:00.540 "state": "online", 00:09:00.540 "raid_level": "raid0", 00:09:00.540 "superblock": true, 00:09:00.540 "num_base_bdevs": 3, 00:09:00.540 "num_base_bdevs_discovered": 3, 00:09:00.540 "num_base_bdevs_operational": 3, 00:09:00.540 "base_bdevs_list": [ 00:09:00.540 { 00:09:00.540 "name": "NewBaseBdev", 00:09:00.540 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:09:00.540 "is_configured": true, 00:09:00.540 "data_offset": 2048, 00:09:00.540 "data_size": 63488 00:09:00.540 }, 00:09:00.540 { 00:09:00.540 "name": "BaseBdev2", 00:09:00.540 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:09:00.540 "is_configured": true, 00:09:00.540 "data_offset": 2048, 00:09:00.540 "data_size": 63488 00:09:00.540 }, 00:09:00.540 { 00:09:00.540 "name": "BaseBdev3", 00:09:00.540 "uuid": 
"3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:09:00.540 "is_configured": true, 00:09:00.540 "data_offset": 2048, 00:09:00.540 "data_size": 63488 00:09:00.540 } 00:09:00.540 ] 00:09:00.540 }' 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.540 09:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.107 [2024-11-20 09:21:26.382659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.107 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.107 "name": "Existed_Raid", 00:09:01.107 "aliases": [ 00:09:01.107 "f08d8f1d-bce9-4763-9d06-b19664b2138c" 
00:09:01.107 ], 00:09:01.107 "product_name": "Raid Volume", 00:09:01.107 "block_size": 512, 00:09:01.107 "num_blocks": 190464, 00:09:01.107 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:09:01.107 "assigned_rate_limits": { 00:09:01.107 "rw_ios_per_sec": 0, 00:09:01.107 "rw_mbytes_per_sec": 0, 00:09:01.107 "r_mbytes_per_sec": 0, 00:09:01.107 "w_mbytes_per_sec": 0 00:09:01.107 }, 00:09:01.107 "claimed": false, 00:09:01.107 "zoned": false, 00:09:01.107 "supported_io_types": { 00:09:01.107 "read": true, 00:09:01.107 "write": true, 00:09:01.107 "unmap": true, 00:09:01.107 "flush": true, 00:09:01.107 "reset": true, 00:09:01.107 "nvme_admin": false, 00:09:01.107 "nvme_io": false, 00:09:01.107 "nvme_io_md": false, 00:09:01.107 "write_zeroes": true, 00:09:01.107 "zcopy": false, 00:09:01.107 "get_zone_info": false, 00:09:01.107 "zone_management": false, 00:09:01.107 "zone_append": false, 00:09:01.107 "compare": false, 00:09:01.107 "compare_and_write": false, 00:09:01.107 "abort": false, 00:09:01.107 "seek_hole": false, 00:09:01.107 "seek_data": false, 00:09:01.107 "copy": false, 00:09:01.107 "nvme_iov_md": false 00:09:01.107 }, 00:09:01.107 "memory_domains": [ 00:09:01.107 { 00:09:01.107 "dma_device_id": "system", 00:09:01.107 "dma_device_type": 1 00:09:01.107 }, 00:09:01.107 { 00:09:01.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.107 "dma_device_type": 2 00:09:01.107 }, 00:09:01.107 { 00:09:01.107 "dma_device_id": "system", 00:09:01.107 "dma_device_type": 1 00:09:01.107 }, 00:09:01.107 { 00:09:01.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.108 "dma_device_type": 2 00:09:01.108 }, 00:09:01.108 { 00:09:01.108 "dma_device_id": "system", 00:09:01.108 "dma_device_type": 1 00:09:01.108 }, 00:09:01.108 { 00:09:01.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.108 "dma_device_type": 2 00:09:01.108 } 00:09:01.108 ], 00:09:01.108 "driver_specific": { 00:09:01.108 "raid": { 00:09:01.108 "uuid": "f08d8f1d-bce9-4763-9d06-b19664b2138c", 00:09:01.108 
"strip_size_kb": 64, 00:09:01.108 "state": "online", 00:09:01.108 "raid_level": "raid0", 00:09:01.108 "superblock": true, 00:09:01.108 "num_base_bdevs": 3, 00:09:01.108 "num_base_bdevs_discovered": 3, 00:09:01.108 "num_base_bdevs_operational": 3, 00:09:01.108 "base_bdevs_list": [ 00:09:01.108 { 00:09:01.108 "name": "NewBaseBdev", 00:09:01.108 "uuid": "93e598f2-4011-4a64-b59f-9b90a5fe0b10", 00:09:01.108 "is_configured": true, 00:09:01.108 "data_offset": 2048, 00:09:01.108 "data_size": 63488 00:09:01.108 }, 00:09:01.108 { 00:09:01.108 "name": "BaseBdev2", 00:09:01.108 "uuid": "42dd774e-6c04-4d73-8839-c1528883d2f4", 00:09:01.108 "is_configured": true, 00:09:01.108 "data_offset": 2048, 00:09:01.108 "data_size": 63488 00:09:01.108 }, 00:09:01.108 { 00:09:01.108 "name": "BaseBdev3", 00:09:01.108 "uuid": "3ad5f617-ba2f-4078-91d6-a9a5f4ea0ef2", 00:09:01.108 "is_configured": true, 00:09:01.108 "data_offset": 2048, 00:09:01.108 "data_size": 63488 00:09:01.108 } 00:09:01.108 ] 00:09:01.108 } 00:09:01.108 } 00:09:01.108 }' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:01.108 BaseBdev2 00:09:01.108 BaseBdev3' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.108 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.367 [2024-11-20 09:21:26.665845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.367 [2024-11-20 09:21:26.666004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.367 [2024-11-20 09:21:26.666180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.367 [2024-11-20 09:21:26.666290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.367 [2024-11-20 09:21:26.666350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64683 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64683 ']' 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64683 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64683 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64683' 00:09:01.367 killing process with pid 64683 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64683 00:09:01.367 09:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64683 00:09:01.367 [2024-11-20 09:21:26.701497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.935 [2024-11-20 09:21:27.096072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.316 09:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.316 00:09:03.316 real 0m12.028s 00:09:03.316 user 0m18.689s 00:09:03.316 sys 0m2.214s 00:09:03.316 ************************************ 00:09:03.316 END TEST raid_state_function_test_sb 00:09:03.316 ************************************ 00:09:03.316 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.316 09:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.316 09:21:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:03.316 09:21:28 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:03.316 09:21:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.316 09:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.316 ************************************ 00:09:03.316 START TEST raid_superblock_test 00:09:03.316 ************************************ 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:03.316 09:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65316 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65316 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65316 ']' 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.316 09:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.316 [2024-11-20 09:21:28.707833] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:09:03.316 [2024-11-20 09:21:28.708081] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65316 ] 00:09:03.575 [2024-11-20 09:21:28.891266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.834 [2024-11-20 09:21:29.052034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.095 [2024-11-20 09:21:29.321712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.095 [2024-11-20 09:21:29.321810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:04.355 
09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.355 malloc1 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.355 [2024-11-20 09:21:29.674667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.355 [2024-11-20 09:21:29.674884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.355 [2024-11-20 09:21:29.674949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:04.355 [2024-11-20 09:21:29.674987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.355 [2024-11-20 09:21:29.677951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.355 [2024-11-20 09:21:29.678072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.355 pt1 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.355 malloc2 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.355 [2024-11-20 09:21:29.747384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.355 [2024-11-20 09:21:29.747518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.355 [2024-11-20 09:21:29.747556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:04.355 [2024-11-20 09:21:29.747569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.355 [2024-11-20 09:21:29.750624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.355 [2024-11-20 09:21:29.750763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.355 
pt2 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.355 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.614 malloc3 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.614 [2024-11-20 09:21:29.829043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:04.614 [2024-11-20 09:21:29.829238] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.614 [2024-11-20 09:21:29.829290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:04.614 [2024-11-20 09:21:29.829330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.614 [2024-11-20 09:21:29.832290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.614 [2024-11-20 09:21:29.832406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:04.614 pt3 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.614 [2024-11-20 09:21:29.841234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.614 [2024-11-20 09:21:29.843802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.614 [2024-11-20 09:21:29.843973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:04.614 [2024-11-20 09:21:29.844217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:04.614 [2024-11-20 09:21:29.844278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.614 [2024-11-20 09:21:29.844694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:04.614 [2024-11-20 09:21:29.844970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:04.614 [2024-11-20 09:21:29.845023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:04.614 [2024-11-20 09:21:29.845377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.614 09:21:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.614 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.614 "name": "raid_bdev1", 00:09:04.614 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:04.614 "strip_size_kb": 64, 00:09:04.614 "state": "online", 00:09:04.614 "raid_level": "raid0", 00:09:04.614 "superblock": true, 00:09:04.614 "num_base_bdevs": 3, 00:09:04.614 "num_base_bdevs_discovered": 3, 00:09:04.614 "num_base_bdevs_operational": 3, 00:09:04.614 "base_bdevs_list": [ 00:09:04.614 { 00:09:04.614 "name": "pt1", 00:09:04.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.614 "is_configured": true, 00:09:04.614 "data_offset": 2048, 00:09:04.614 "data_size": 63488 00:09:04.614 }, 00:09:04.614 { 00:09:04.614 "name": "pt2", 00:09:04.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.614 "is_configured": true, 00:09:04.614 "data_offset": 2048, 00:09:04.614 "data_size": 63488 00:09:04.614 }, 00:09:04.614 { 00:09:04.614 "name": "pt3", 00:09:04.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.614 "is_configured": true, 00:09:04.614 "data_offset": 2048, 00:09:04.614 "data_size": 63488 00:09:04.614 } 00:09:04.614 ] 00:09:04.615 }' 00:09:04.615 09:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.615 09:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.874 [2024-11-20 09:21:30.269047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.874 "name": "raid_bdev1", 00:09:04.874 "aliases": [ 00:09:04.874 "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b" 00:09:04.874 ], 00:09:04.874 "product_name": "Raid Volume", 00:09:04.874 "block_size": 512, 00:09:04.874 "num_blocks": 190464, 00:09:04.874 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:04.874 "assigned_rate_limits": { 00:09:04.874 "rw_ios_per_sec": 0, 00:09:04.874 "rw_mbytes_per_sec": 0, 00:09:04.874 "r_mbytes_per_sec": 0, 00:09:04.874 "w_mbytes_per_sec": 0 00:09:04.874 }, 00:09:04.874 "claimed": false, 00:09:04.874 "zoned": false, 00:09:04.874 "supported_io_types": { 00:09:04.874 "read": true, 00:09:04.874 "write": true, 00:09:04.874 "unmap": true, 00:09:04.874 "flush": true, 00:09:04.874 "reset": true, 00:09:04.874 "nvme_admin": false, 00:09:04.874 "nvme_io": false, 00:09:04.874 "nvme_io_md": false, 00:09:04.874 "write_zeroes": true, 00:09:04.874 "zcopy": false, 00:09:04.874 "get_zone_info": false, 00:09:04.874 "zone_management": false, 00:09:04.874 "zone_append": false, 00:09:04.874 "compare": 
false, 00:09:04.874 "compare_and_write": false, 00:09:04.874 "abort": false, 00:09:04.874 "seek_hole": false, 00:09:04.874 "seek_data": false, 00:09:04.874 "copy": false, 00:09:04.874 "nvme_iov_md": false 00:09:04.874 }, 00:09:04.874 "memory_domains": [ 00:09:04.874 { 00:09:04.874 "dma_device_id": "system", 00:09:04.874 "dma_device_type": 1 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.874 "dma_device_type": 2 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "dma_device_id": "system", 00:09:04.874 "dma_device_type": 1 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.874 "dma_device_type": 2 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "dma_device_id": "system", 00:09:04.874 "dma_device_type": 1 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.874 "dma_device_type": 2 00:09:04.874 } 00:09:04.874 ], 00:09:04.874 "driver_specific": { 00:09:04.874 "raid": { 00:09:04.874 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:04.874 "strip_size_kb": 64, 00:09:04.874 "state": "online", 00:09:04.874 "raid_level": "raid0", 00:09:04.874 "superblock": true, 00:09:04.874 "num_base_bdevs": 3, 00:09:04.874 "num_base_bdevs_discovered": 3, 00:09:04.874 "num_base_bdevs_operational": 3, 00:09:04.874 "base_bdevs_list": [ 00:09:04.874 { 00:09:04.874 "name": "pt1", 00:09:04.874 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.874 "is_configured": true, 00:09:04.874 "data_offset": 2048, 00:09:04.874 "data_size": 63488 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "name": "pt2", 00:09:04.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.874 "is_configured": true, 00:09:04.874 "data_offset": 2048, 00:09:04.874 "data_size": 63488 00:09:04.874 }, 00:09:04.874 { 00:09:04.874 "name": "pt3", 00:09:04.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.874 "is_configured": true, 00:09:04.874 "data_offset": 2048, 00:09:04.874 "data_size": 
63488 00:09:04.874 } 00:09:04.874 ] 00:09:04.874 } 00:09:04.874 } 00:09:04.874 }' 00:09:04.874 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:05.134 pt2 00:09:05.134 pt3' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:05.134 [2024-11-20 09:21:30.580539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2bc246b8-ef67-452d-bac0-5e7f1dab6a0b 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2bc246b8-ef67-452d-bac0-5e7f1dab6a0b ']' 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.397 [2024-11-20 09:21:30.628159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.397 [2024-11-20 09:21:30.628319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.397 [2024-11-20 09:21:30.628492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.397 [2024-11-20 09:21:30.628580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.397 [2024-11-20 09:21:30.628593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
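The trace above deletes `raid_bdev1` and then confirms that `bdev_raid_get_bdevs all` piped through `jq -r '.[]'` yields nothing, so the `'[' -n "$raid_bdev" ']'` guard is skipped. A minimal self-contained sketch of that emptiness check follows; `get_raid_bdevs` is a hypothetical stub standing in for the RPC call, since no live SPDK target is assumed:

```shell
# Stand-in for `rpc_cmd bdev_raid_get_bdevs all` after bdev_raid_delete:
# the target answers with an empty JSON array, so the jq -r '.[]' step in
# bdev_raid.sh produces no output at all. Stubbed here, no SPDK target needed.
get_raid_bdevs() { printf '[]\n'; }

# Emulate jq -r '.[]' on an empty array: there is nothing to emit.
raid_bdev=''
[ "$(get_raid_bdevs)" = '[]' ] && raid_bdev=''

# The script's `'[' -n "$raid_bdev" ']'` test therefore fails, confirming
# that raid_bdev1 is gone before the passthru base bdevs are torn down.
if [ -n "$raid_bdev" ]; then
  echo "stale raid bdev: $raid_bdev" >&2
  exit 1
fi
echo "raid_bdev1 fully deleted"
```

The same empty-string guard reappears later in the trace (`'[' -n '' ']'`) after the negative `bdev_raid_create` test, for the same reason.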
00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.397 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.398 [2024-11-20 09:21:30.791965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:05.398 [2024-11-20 09:21:30.794778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:05.398 [2024-11-20 09:21:30.794928] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:05.398 [2024-11-20 09:21:30.795029] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:05.398 [2024-11-20 09:21:30.795158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:05.398 [2024-11-20 09:21:30.795255] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:05.398 [2024-11-20 09:21:30.795321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.398 [2024-11-20 09:21:30.795366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:05.398 request: 00:09:05.398 { 00:09:05.398 "name": "raid_bdev1", 00:09:05.398 "raid_level": "raid0", 00:09:05.398 "base_bdevs": [ 00:09:05.398 "malloc1", 00:09:05.398 "malloc2", 00:09:05.398 "malloc3" 00:09:05.398 ], 00:09:05.398 "strip_size_kb": 64, 00:09:05.398 "superblock": false, 00:09:05.398 "method": "bdev_raid_create", 00:09:05.398 "req_id": 1 00:09:05.398 } 00:09:05.398 Got JSON-RPC error response 00:09:05.398 response: 00:09:05.398 { 00:09:05.398 "code": -17, 00:09:05.398 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:05.398 } 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:05.398 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.671 [2024-11-20 09:21:30.863976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:05.671 [2024-11-20 09:21:30.864191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.671 [2024-11-20 09:21:30.864224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:05.671 [2024-11-20 09:21:30.864236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.671 [2024-11-20 09:21:30.867328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.671 [2024-11-20 09:21:30.867454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:05.671 [2024-11-20 09:21:30.867629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:05.671 [2024-11-20 09:21:30.867705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
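At this point the script recreates the passthru bdevs on top of the malloc bdevs (pt1 above; pt2 and pt3 follow the same pattern), and the examine path finds the raid superblock left on each one. A hedged sketch of that recreation step, with `rpc_cmd` stubbed to print its arguments rather than contact a live target (a real run would go through SPDK's `rpc.py`):

```shell
# Stub: echo the RPC instead of sending it to a running SPDK target.
rpc_cmd() { echo "rpc: $*"; }

# Recreate each passthru bdev with a fixed UUID so the superblock written
# earlier can be matched back to it during examine.
created=$(rpc_cmd bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001)
echo "$created"
```

On a live target this returns the new bdev name (`pt1`), after which the trace shows `raid superblock found on bdev pt1` and the bdev being claimed by the raid module.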
00:09:05.671 pt1 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.671 "name": "raid_bdev1", 00:09:05.671 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:05.671 
"strip_size_kb": 64, 00:09:05.671 "state": "configuring", 00:09:05.671 "raid_level": "raid0", 00:09:05.671 "superblock": true, 00:09:05.671 "num_base_bdevs": 3, 00:09:05.671 "num_base_bdevs_discovered": 1, 00:09:05.671 "num_base_bdevs_operational": 3, 00:09:05.671 "base_bdevs_list": [ 00:09:05.671 { 00:09:05.671 "name": "pt1", 00:09:05.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.671 "is_configured": true, 00:09:05.671 "data_offset": 2048, 00:09:05.671 "data_size": 63488 00:09:05.671 }, 00:09:05.671 { 00:09:05.671 "name": null, 00:09:05.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.671 "is_configured": false, 00:09:05.671 "data_offset": 2048, 00:09:05.671 "data_size": 63488 00:09:05.671 }, 00:09:05.671 { 00:09:05.671 "name": null, 00:09:05.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.671 "is_configured": false, 00:09:05.671 "data_offset": 2048, 00:09:05.671 "data_size": 63488 00:09:05.671 } 00:09:05.671 ] 00:09:05.671 }' 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.671 09:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.930 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:05.930 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.930 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.930 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.930 [2024-11-20 09:21:31.291351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.930 [2024-11-20 09:21:31.291576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.930 [2024-11-20 09:21:31.291623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:05.930 [2024-11-20 09:21:31.291671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.930 [2024-11-20 09:21:31.292326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.930 [2024-11-20 09:21:31.292404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.930 [2024-11-20 09:21:31.292584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.930 [2024-11-20 09:21:31.292650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.930 pt2 00:09:05.930 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.931 [2024-11-20 09:21:31.299354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.931 09:21:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.931 "name": "raid_bdev1", 00:09:05.931 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:05.931 "strip_size_kb": 64, 00:09:05.931 "state": "configuring", 00:09:05.931 "raid_level": "raid0", 00:09:05.931 "superblock": true, 00:09:05.931 "num_base_bdevs": 3, 00:09:05.931 "num_base_bdevs_discovered": 1, 00:09:05.931 "num_base_bdevs_operational": 3, 00:09:05.931 "base_bdevs_list": [ 00:09:05.931 { 00:09:05.931 "name": "pt1", 00:09:05.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.931 "is_configured": true, 00:09:05.931 "data_offset": 2048, 00:09:05.931 "data_size": 63488 00:09:05.931 }, 00:09:05.931 { 00:09:05.931 "name": null, 00:09:05.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.931 "is_configured": false, 00:09:05.931 "data_offset": 0, 00:09:05.931 "data_size": 63488 00:09:05.931 }, 00:09:05.931 { 00:09:05.931 "name": null, 00:09:05.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.931 
"is_configured": false, 00:09:05.931 "data_offset": 2048, 00:09:05.931 "data_size": 63488 00:09:05.931 } 00:09:05.931 ] 00:09:05.931 }' 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.931 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.499 [2024-11-20 09:21:31.806585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.499 [2024-11-20 09:21:31.806713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.499 [2024-11-20 09:21:31.806740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:06.499 [2024-11-20 09:21:31.806755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.499 [2024-11-20 09:21:31.807417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.499 [2024-11-20 09:21:31.807464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.499 [2024-11-20 09:21:31.807601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:06.499 [2024-11-20 09:21:31.807638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.499 pt2 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
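The JSON just dumped shows `raid_bdev1` back in the `configuring` state after pt2 was removed: one base bdev discovered, three operational. A minimal sketch of the `verify_raid_bdev_state` comparison, with the values hard-coded from the dump above (a real run would extract them from `bdev_raid_get_bdevs` via jq):

```shell
# Values taken from the raid_bdev_info JSON in the trace; hard-coded here
# instead of being pulled from a live target with jq.
state=configuring
num_discovered=1
num_operational=3

# The raid bdev may not come online until every operational slot is filled.
[[ $state == configuring ]] || { echo "unexpected state: $state" >&2; exit 1; }
if (( num_discovered < num_operational )); then
  echo "raid_bdev1 waiting on $((num_operational - num_discovered)) base bdev(s)"
fi
```

Once pt2 and pt3 are recreated later in the trace, `num_base_bdevs_discovered` reaches 3 and the state flips to `online`.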
00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.499 [2024-11-20 09:21:31.818568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:06.499 [2024-11-20 09:21:31.818765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.499 [2024-11-20 09:21:31.818792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:06.499 [2024-11-20 09:21:31.818806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.499 [2024-11-20 09:21:31.819419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.499 [2024-11-20 09:21:31.819468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:06.499 [2024-11-20 09:21:31.819579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:06.499 [2024-11-20 09:21:31.819611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:06.499 [2024-11-20 09:21:31.819785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:06.499 [2024-11-20 09:21:31.819800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.499 [2024-11-20 09:21:31.820168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:06.499 [2024-11-20 09:21:31.820379] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.499 [2024-11-20 09:21:31.820390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:06.499 [2024-11-20 09:21:31.820602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.499 pt3 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.499 "name": "raid_bdev1", 00:09:06.499 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:06.499 "strip_size_kb": 64, 00:09:06.499 "state": "online", 00:09:06.499 "raid_level": "raid0", 00:09:06.499 "superblock": true, 00:09:06.499 "num_base_bdevs": 3, 00:09:06.499 "num_base_bdevs_discovered": 3, 00:09:06.499 "num_base_bdevs_operational": 3, 00:09:06.499 "base_bdevs_list": [ 00:09:06.499 { 00:09:06.499 "name": "pt1", 00:09:06.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.499 "is_configured": true, 00:09:06.499 "data_offset": 2048, 00:09:06.499 "data_size": 63488 00:09:06.499 }, 00:09:06.499 { 00:09:06.499 "name": "pt2", 00:09:06.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.499 "is_configured": true, 00:09:06.499 "data_offset": 2048, 00:09:06.499 "data_size": 63488 00:09:06.499 }, 00:09:06.499 { 00:09:06.499 "name": "pt3", 00:09:06.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.499 "is_configured": true, 00:09:06.499 "data_offset": 2048, 00:09:06.499 "data_size": 63488 00:09:06.499 } 00:09:06.499 ] 00:09:06.499 }' 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.499 09:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:07.067 09:21:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.067 [2024-11-20 09:21:32.314164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.067 "name": "raid_bdev1", 00:09:07.067 "aliases": [ 00:09:07.067 "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b" 00:09:07.067 ], 00:09:07.067 "product_name": "Raid Volume", 00:09:07.067 "block_size": 512, 00:09:07.067 "num_blocks": 190464, 00:09:07.067 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:07.067 "assigned_rate_limits": { 00:09:07.067 "rw_ios_per_sec": 0, 00:09:07.067 "rw_mbytes_per_sec": 0, 00:09:07.067 "r_mbytes_per_sec": 0, 00:09:07.067 "w_mbytes_per_sec": 0 00:09:07.067 }, 00:09:07.067 "claimed": false, 00:09:07.067 "zoned": false, 00:09:07.067 "supported_io_types": { 00:09:07.067 "read": true, 00:09:07.067 "write": true, 00:09:07.067 "unmap": true, 00:09:07.067 "flush": true, 00:09:07.067 "reset": true, 00:09:07.067 "nvme_admin": false, 00:09:07.067 "nvme_io": false, 00:09:07.067 "nvme_io_md": false, 00:09:07.067 
"write_zeroes": true, 00:09:07.067 "zcopy": false, 00:09:07.067 "get_zone_info": false, 00:09:07.067 "zone_management": false, 00:09:07.067 "zone_append": false, 00:09:07.067 "compare": false, 00:09:07.067 "compare_and_write": false, 00:09:07.067 "abort": false, 00:09:07.067 "seek_hole": false, 00:09:07.067 "seek_data": false, 00:09:07.067 "copy": false, 00:09:07.067 "nvme_iov_md": false 00:09:07.067 }, 00:09:07.067 "memory_domains": [ 00:09:07.067 { 00:09:07.067 "dma_device_id": "system", 00:09:07.067 "dma_device_type": 1 00:09:07.067 }, 00:09:07.067 { 00:09:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.067 "dma_device_type": 2 00:09:07.067 }, 00:09:07.067 { 00:09:07.067 "dma_device_id": "system", 00:09:07.067 "dma_device_type": 1 00:09:07.067 }, 00:09:07.067 { 00:09:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.067 "dma_device_type": 2 00:09:07.067 }, 00:09:07.067 { 00:09:07.067 "dma_device_id": "system", 00:09:07.067 "dma_device_type": 1 00:09:07.067 }, 00:09:07.067 { 00:09:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.067 "dma_device_type": 2 00:09:07.067 } 00:09:07.067 ], 00:09:07.067 "driver_specific": { 00:09:07.067 "raid": { 00:09:07.067 "uuid": "2bc246b8-ef67-452d-bac0-5e7f1dab6a0b", 00:09:07.067 "strip_size_kb": 64, 00:09:07.067 "state": "online", 00:09:07.067 "raid_level": "raid0", 00:09:07.067 "superblock": true, 00:09:07.067 "num_base_bdevs": 3, 00:09:07.067 "num_base_bdevs_discovered": 3, 00:09:07.067 "num_base_bdevs_operational": 3, 00:09:07.067 "base_bdevs_list": [ 00:09:07.067 { 00:09:07.067 "name": "pt1", 00:09:07.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.067 "is_configured": true, 00:09:07.067 "data_offset": 2048, 00:09:07.067 "data_size": 63488 00:09:07.067 }, 00:09:07.067 { 00:09:07.067 "name": "pt2", 00:09:07.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.067 "is_configured": true, 00:09:07.067 "data_offset": 2048, 00:09:07.067 "data_size": 63488 00:09:07.067 }, 00:09:07.067 
{ 00:09:07.067 "name": "pt3", 00:09:07.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.067 "is_configured": true, 00:09:07.067 "data_offset": 2048, 00:09:07.067 "data_size": 63488 00:09:07.067 } 00:09:07.067 ] 00:09:07.067 } 00:09:07.067 } 00:09:07.067 }' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:07.067 pt2 00:09:07.067 pt3' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.067 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.326 [2024-11-20 
09:21:32.537756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2bc246b8-ef67-452d-bac0-5e7f1dab6a0b '!=' 2bc246b8-ef67-452d-bac0-5e7f1dab6a0b ']' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65316 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65316 ']' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65316 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65316 00:09:07.326 killing process with pid 65316 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65316' 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65316 00:09:07.326 [2024-11-20 09:21:32.611551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.326 09:21:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65316 00:09:07.326 [2024-11-20 09:21:32.611744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.326 [2024-11-20 09:21:32.611826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.326 [2024-11-20 09:21:32.611843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:07.585 [2024-11-20 09:21:33.009974] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.491 09:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:09.491 00:09:09.491 real 0m5.859s 00:09:09.491 user 0m8.126s 00:09:09.491 sys 0m1.070s 00:09:09.491 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.491 09:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.491 ************************************ 00:09:09.491 END TEST raid_superblock_test 00:09:09.491 ************************************ 00:09:09.491 09:21:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:09.491 09:21:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:09.491 09:21:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.491 09:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.491 ************************************ 00:09:09.491 START TEST raid_read_error_test 00:09:09.491 ************************************ 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:09.491 09:21:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.491 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.72sBRMKtDt 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65575 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65575 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65575 ']' 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.492 09:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.492 [2024-11-20 09:21:34.658423] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:09:09.492 [2024-11-20 09:21:34.658655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65575 ] 00:09:09.492 [2024-11-20 09:21:34.836814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.768 [2024-11-20 09:21:35.017282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.026 [2024-11-20 09:21:35.288280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.026 [2024-11-20 09:21:35.288500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.285 BaseBdev1_malloc 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.285 true 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.285 [2024-11-20 09:21:35.670251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.285 [2024-11-20 09:21:35.670364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.285 [2024-11-20 09:21:35.670398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.285 [2024-11-20 09:21:35.670414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.285 [2024-11-20 09:21:35.673601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.285 [2024-11-20 09:21:35.673665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.285 BaseBdev1 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.285 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 BaseBdev2_malloc 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 true 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.544 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 [2024-11-20 09:21:35.757617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.544 [2024-11-20 09:21:35.757730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.545 [2024-11-20 09:21:35.757759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.545 [2024-11-20 09:21:35.757773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.545 [2024-11-20 09:21:35.760940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.545 [2024-11-20 09:21:35.761113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.545 BaseBdev2 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.545 BaseBdev3_malloc 00:09:10.545 09:21:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.545 true 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.545 [2024-11-20 09:21:35.854756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.545 [2024-11-20 09:21:35.854975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.545 [2024-11-20 09:21:35.855042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:10.545 [2024-11-20 09:21:35.855085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.545 [2024-11-20 09:21:35.858274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.545 [2024-11-20 09:21:35.858425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:10.545 BaseBdev3 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.545 [2024-11-20 09:21:35.867002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.545 [2024-11-20 09:21:35.869775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.545 [2024-11-20 09:21:35.869908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.545 [2024-11-20 09:21:35.870202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:10.545 [2024-11-20 09:21:35.870221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.545 [2024-11-20 09:21:35.870635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:10.545 [2024-11-20 09:21:35.870864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:10.545 [2024-11-20 09:21:35.870882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:10.545 [2024-11-20 09:21:35.871214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.545 09:21:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.545 "name": "raid_bdev1", 00:09:10.545 "uuid": "9ded6ddc-f74f-4d04-9787-5ad7218227e7", 00:09:10.545 "strip_size_kb": 64, 00:09:10.545 "state": "online", 00:09:10.545 "raid_level": "raid0", 00:09:10.545 "superblock": true, 00:09:10.545 "num_base_bdevs": 3, 00:09:10.545 "num_base_bdevs_discovered": 3, 00:09:10.545 "num_base_bdevs_operational": 3, 00:09:10.545 "base_bdevs_list": [ 00:09:10.545 { 00:09:10.545 "name": "BaseBdev1", 00:09:10.545 "uuid": "2d8e348a-151a-55f8-a14c-7083ac1b30a9", 00:09:10.545 "is_configured": true, 00:09:10.545 "data_offset": 2048, 00:09:10.545 "data_size": 63488 00:09:10.545 }, 00:09:10.545 { 00:09:10.545 "name": "BaseBdev2", 00:09:10.545 "uuid": "465b4a19-6d8c-55dc-bf85-b4f4ddb9e37c", 00:09:10.545 "is_configured": true, 00:09:10.545 "data_offset": 2048, 00:09:10.545 "data_size": 63488 
00:09:10.545 }, 00:09:10.545 { 00:09:10.545 "name": "BaseBdev3", 00:09:10.545 "uuid": "b2e1b497-c21d-5858-867e-518d99c8ec53", 00:09:10.545 "is_configured": true, 00:09:10.545 "data_offset": 2048, 00:09:10.545 "data_size": 63488 00:09:10.545 } 00:09:10.545 ] 00:09:10.545 }' 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.545 09:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.113 09:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:11.113 09:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:11.113 [2024-11-20 09:21:36.419982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.049 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.049 "name": "raid_bdev1", 00:09:12.049 "uuid": "9ded6ddc-f74f-4d04-9787-5ad7218227e7", 00:09:12.049 "strip_size_kb": 64, 00:09:12.049 "state": "online", 00:09:12.049 "raid_level": "raid0", 00:09:12.049 "superblock": true, 00:09:12.049 "num_base_bdevs": 3, 00:09:12.049 "num_base_bdevs_discovered": 3, 00:09:12.049 "num_base_bdevs_operational": 3, 00:09:12.049 "base_bdevs_list": [ 00:09:12.049 { 00:09:12.050 "name": "BaseBdev1", 00:09:12.050 "uuid": "2d8e348a-151a-55f8-a14c-7083ac1b30a9", 00:09:12.050 "is_configured": true, 00:09:12.050 "data_offset": 2048, 00:09:12.050 "data_size": 63488 
00:09:12.050 }, 00:09:12.050 { 00:09:12.050 "name": "BaseBdev2", 00:09:12.050 "uuid": "465b4a19-6d8c-55dc-bf85-b4f4ddb9e37c", 00:09:12.050 "is_configured": true, 00:09:12.050 "data_offset": 2048, 00:09:12.050 "data_size": 63488 00:09:12.050 }, 00:09:12.050 { 00:09:12.050 "name": "BaseBdev3", 00:09:12.050 "uuid": "b2e1b497-c21d-5858-867e-518d99c8ec53", 00:09:12.050 "is_configured": true, 00:09:12.050 "data_offset": 2048, 00:09:12.050 "data_size": 63488 00:09:12.050 } 00:09:12.050 ] 00:09:12.050 }' 00:09:12.050 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.050 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.617 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.617 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.617 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.617 [2024-11-20 09:21:37.811025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.617 [2024-11-20 09:21:37.811084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.617 { 00:09:12.617 "results": [ 00:09:12.617 { 00:09:12.617 "job": "raid_bdev1", 00:09:12.617 "core_mask": "0x1", 00:09:12.617 "workload": "randrw", 00:09:12.617 "percentage": 50, 00:09:12.617 "status": "finished", 00:09:12.617 "queue_depth": 1, 00:09:12.617 "io_size": 131072, 00:09:12.617 "runtime": 1.390923, 00:09:12.617 "iops": 11021.458412866852, 00:09:12.617 "mibps": 1377.6823016083565, 00:09:12.618 "io_failed": 1, 00:09:12.618 "io_timeout": 0, 00:09:12.618 "avg_latency_us": 127.94208862427043, 00:09:12.618 "min_latency_us": 32.866375545851525, 00:09:12.618 "max_latency_us": 1781.4917030567685 00:09:12.618 } 00:09:12.618 ], 00:09:12.618 "core_count": 1 00:09:12.618 } 00:09:12.618 [2024-11-20 
09:21:37.814453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.618 [2024-11-20 09:21:37.814534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.618 [2024-11-20 09:21:37.814585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.618 [2024-11-20 09:21:37.814598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65575 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65575 ']' 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65575 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65575 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65575' 00:09:12.618 killing process with pid 65575 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65575 00:09:12.618 [2024-11-20 09:21:37.857993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.618 09:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65575 00:09:12.877 [2024-11-20 
09:21:38.167139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.250 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.72sBRMKtDt 00:09:14.250 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:14.251 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:14.508 00:09:14.508 real 0m5.177s 00:09:14.508 user 0m6.029s 00:09:14.508 sys 0m0.715s 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.508 09:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.508 ************************************ 00:09:14.508 END TEST raid_read_error_test 00:09:14.508 ************************************ 00:09:14.508 09:21:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:14.508 09:21:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.508 09:21:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.508 09:21:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.508 ************************************ 00:09:14.508 START TEST raid_write_error_test 00:09:14.508 ************************************ 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:14.508 09:21:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:14.508 09:21:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jBjhYaYnx1 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65726 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65726 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65726 ']' 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.508 09:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.508 [2024-11-20 09:21:39.922395] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:09:14.508 [2024-11-20 09:21:39.922662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65726 ] 00:09:14.768 [2024-11-20 09:21:40.104134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.025 [2024-11-20 09:21:40.272418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.283 [2024-11-20 09:21:40.556307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.283 [2024-11-20 09:21:40.556554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.540 BaseBdev1_malloc 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.540 true 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.540 [2024-11-20 09:21:40.931117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:15.540 [2024-11-20 09:21:40.931235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.540 [2024-11-20 09:21:40.931268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:15.540 [2024-11-20 09:21:40.931284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.540 [2024-11-20 09:21:40.934469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.540 [2024-11-20 09:21:40.934538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:15.540 BaseBdev1 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.540 09:21:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.798 BaseBdev2_malloc 00:09:15.798 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.798 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.798 09:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 true 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 [2024-11-20 09:21:41.014063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.798 [2024-11-20 09:21:41.014180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.798 [2024-11-20 09:21:41.014211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.798 [2024-11-20 09:21:41.014225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.798 [2024-11-20 09:21:41.017334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.798 [2024-11-20 09:21:41.017396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.798 BaseBdev2 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.798 09:21:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 BaseBdev3_malloc 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 true 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 [2024-11-20 09:21:41.110543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:15.798 [2024-11-20 09:21:41.110769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.798 [2024-11-20 09:21:41.110804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:15.798 [2024-11-20 09:21:41.110820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.798 [2024-11-20 09:21:41.113949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.798 [2024-11-20 09:21:41.114089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:15.798 BaseBdev3 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.798 [2024-11-20 09:21:41.122634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.798 [2024-11-20 09:21:41.125250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.798 [2024-11-20 09:21:41.125473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.798 [2024-11-20 09:21:41.125750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.798 [2024-11-20 09:21:41.125768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.798 [2024-11-20 09:21:41.126146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:15.798 [2024-11-20 09:21:41.126352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.798 [2024-11-20 09:21:41.126369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:15.798 [2024-11-20 09:21:41.126686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:15.798 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.799 "name": "raid_bdev1", 00:09:15.799 "uuid": "0120bb5c-573e-48cc-9f7b-fa0113ddb9f7", 00:09:15.799 "strip_size_kb": 64, 00:09:15.799 "state": "online", 00:09:15.799 "raid_level": "raid0", 00:09:15.799 "superblock": true, 00:09:15.799 "num_base_bdevs": 3, 00:09:15.799 "num_base_bdevs_discovered": 3, 00:09:15.799 "num_base_bdevs_operational": 3, 00:09:15.799 "base_bdevs_list": [ 00:09:15.799 { 00:09:15.799 "name": "BaseBdev1", 
00:09:15.799 "uuid": "ee2905a2-7298-5c40-b87a-d58eb4fd5ea2", 00:09:15.799 "is_configured": true, 00:09:15.799 "data_offset": 2048, 00:09:15.799 "data_size": 63488 00:09:15.799 }, 00:09:15.799 { 00:09:15.799 "name": "BaseBdev2", 00:09:15.799 "uuid": "088f0f7d-4911-57f5-8389-9e16b8705c4e", 00:09:15.799 "is_configured": true, 00:09:15.799 "data_offset": 2048, 00:09:15.799 "data_size": 63488 00:09:15.799 }, 00:09:15.799 { 00:09:15.799 "name": "BaseBdev3", 00:09:15.799 "uuid": "078990c7-5ed5-52e1-9d79-f1bc4b55befa", 00:09:15.799 "is_configured": true, 00:09:15.799 "data_offset": 2048, 00:09:15.799 "data_size": 63488 00:09:15.799 } 00:09:15.799 ] 00:09:15.799 }' 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.799 09:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.365 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:16.365 09:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:16.365 [2024-11-20 09:21:41.759367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.299 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.300 "name": "raid_bdev1", 00:09:17.300 "uuid": "0120bb5c-573e-48cc-9f7b-fa0113ddb9f7", 00:09:17.300 "strip_size_kb": 64, 00:09:17.300 "state": "online", 00:09:17.300 
"raid_level": "raid0", 00:09:17.300 "superblock": true, 00:09:17.300 "num_base_bdevs": 3, 00:09:17.300 "num_base_bdevs_discovered": 3, 00:09:17.300 "num_base_bdevs_operational": 3, 00:09:17.300 "base_bdevs_list": [ 00:09:17.300 { 00:09:17.300 "name": "BaseBdev1", 00:09:17.300 "uuid": "ee2905a2-7298-5c40-b87a-d58eb4fd5ea2", 00:09:17.300 "is_configured": true, 00:09:17.300 "data_offset": 2048, 00:09:17.300 "data_size": 63488 00:09:17.300 }, 00:09:17.300 { 00:09:17.300 "name": "BaseBdev2", 00:09:17.300 "uuid": "088f0f7d-4911-57f5-8389-9e16b8705c4e", 00:09:17.300 "is_configured": true, 00:09:17.300 "data_offset": 2048, 00:09:17.300 "data_size": 63488 00:09:17.300 }, 00:09:17.300 { 00:09:17.300 "name": "BaseBdev3", 00:09:17.300 "uuid": "078990c7-5ed5-52e1-9d79-f1bc4b55befa", 00:09:17.300 "is_configured": true, 00:09:17.300 "data_offset": 2048, 00:09:17.300 "data_size": 63488 00:09:17.300 } 00:09:17.300 ] 00:09:17.300 }' 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.300 09:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.867 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.867 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.867 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.867 [2024-11-20 09:21:43.053754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.867 [2024-11-20 09:21:43.053812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.868 [2024-11-20 09:21:43.057180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.868 [2024-11-20 09:21:43.057345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.868 [2024-11-20 09:21:43.057406] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.868 [2024-11-20 09:21:43.057419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:17.868 { 00:09:17.868 "results": [ 00:09:17.868 { 00:09:17.868 "job": "raid_bdev1", 00:09:17.868 "core_mask": "0x1", 00:09:17.868 "workload": "randrw", 00:09:17.868 "percentage": 50, 00:09:17.868 "status": "finished", 00:09:17.868 "queue_depth": 1, 00:09:17.868 "io_size": 131072, 00:09:17.868 "runtime": 1.294226, 00:09:17.868 "iops": 11110.11523489715, 00:09:17.868 "mibps": 1388.7644043621438, 00:09:17.868 "io_failed": 1, 00:09:17.868 "io_timeout": 0, 00:09:17.868 "avg_latency_us": 126.79388427643926, 00:09:17.868 "min_latency_us": 32.19563318777293, 00:09:17.868 "max_latency_us": 1781.4917030567685 00:09:17.868 } 00:09:17.868 ], 00:09:17.868 "core_count": 1 00:09:17.868 } 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65726 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65726 ']' 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65726 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65726 00:09:17.868 killing process with pid 65726 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.868 09:21:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65726' 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65726 00:09:17.868 [2024-11-20 09:21:43.091480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.868 09:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65726 00:09:18.127 [2024-11-20 09:21:43.401643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jBjhYaYnx1 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:19.502 ************************************ 00:09:19.502 END TEST raid_write_error_test 00:09:19.502 ************************************ 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:09:19.502 00:09:19.502 real 0m5.128s 00:09:19.502 user 0m5.995s 00:09:19.502 sys 0m0.740s 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.502 09:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.761 09:21:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:19.761 09:21:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:19.761 09:21:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.761 09:21:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.761 09:21:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.761 ************************************ 00:09:19.761 START TEST raid_state_function_test 00:09:19.761 ************************************ 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:19.761 09:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:19.761 09:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65876 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:19.761 Process raid pid: 65876 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65876' 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65876 00:09:19.761 09:21:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65876 ']' 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.761 09:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.761 [2024-11-20 09:21:45.103575] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:09:19.761 [2024-11-20 09:21:45.103836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.020 [2024-11-20 09:21:45.284442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.020 [2024-11-20 09:21:45.451213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.588 [2024-11-20 09:21:45.744284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.588 [2024-11-20 09:21:45.744363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.588 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.588 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.588 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.588 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.588 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.588 [2024-11-20 09:21:46.027004] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.588 [2024-11-20 09:21:46.027105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.588 [2024-11-20 09:21:46.027118] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.589 [2024-11-20 09:21:46.027132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.589 [2024-11-20 09:21:46.027140] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.589 [2024-11-20 09:21:46.027152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.589 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.847 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.847 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.847 "name": "Existed_Raid", 00:09:20.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.847 "strip_size_kb": 64, 00:09:20.847 "state": "configuring", 00:09:20.847 "raid_level": "concat", 00:09:20.847 "superblock": false, 00:09:20.847 "num_base_bdevs": 3, 00:09:20.847 "num_base_bdevs_discovered": 0, 00:09:20.847 "num_base_bdevs_operational": 3, 00:09:20.847 "base_bdevs_list": [ 00:09:20.847 { 00:09:20.847 "name": "BaseBdev1", 00:09:20.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.847 "is_configured": false, 00:09:20.847 "data_offset": 0, 00:09:20.848 "data_size": 0 00:09:20.848 }, 00:09:20.848 { 00:09:20.848 "name": "BaseBdev2", 00:09:20.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.848 "is_configured": false, 00:09:20.848 "data_offset": 0, 00:09:20.848 "data_size": 0 00:09:20.848 }, 00:09:20.848 { 00:09:20.848 "name": "BaseBdev3", 00:09:20.848 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:20.848 "is_configured": false, 00:09:20.848 "data_offset": 0, 00:09:20.848 "data_size": 0 00:09:20.848 } 00:09:20.848 ] 00:09:20.848 }' 00:09:20.848 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.848 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.105 [2024-11-20 09:21:46.538143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.105 [2024-11-20 09:21:46.538214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.105 [2024-11-20 09:21:46.550120] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.105 [2024-11-20 09:21:46.550211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.105 [2024-11-20 09:21:46.550222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.105 [2024-11-20 09:21:46.550234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:21.105 [2024-11-20 09:21:46.550242] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.105 [2024-11-20 09:21:46.550253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.105 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.365 [2024-11-20 09:21:46.614060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.365 BaseBdev1 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.365 [ 00:09:21.365 { 00:09:21.365 "name": "BaseBdev1", 00:09:21.365 "aliases": [ 00:09:21.365 "c9a09e4c-2960-4978-8804-9f09de875dc6" 00:09:21.365 ], 00:09:21.365 "product_name": "Malloc disk", 00:09:21.365 "block_size": 512, 00:09:21.365 "num_blocks": 65536, 00:09:21.365 "uuid": "c9a09e4c-2960-4978-8804-9f09de875dc6", 00:09:21.365 "assigned_rate_limits": { 00:09:21.365 "rw_ios_per_sec": 0, 00:09:21.365 "rw_mbytes_per_sec": 0, 00:09:21.365 "r_mbytes_per_sec": 0, 00:09:21.365 "w_mbytes_per_sec": 0 00:09:21.365 }, 00:09:21.365 "claimed": true, 00:09:21.365 "claim_type": "exclusive_write", 00:09:21.365 "zoned": false, 00:09:21.365 "supported_io_types": { 00:09:21.365 "read": true, 00:09:21.365 "write": true, 00:09:21.365 "unmap": true, 00:09:21.365 "flush": true, 00:09:21.365 "reset": true, 00:09:21.365 "nvme_admin": false, 00:09:21.365 "nvme_io": false, 00:09:21.365 "nvme_io_md": false, 00:09:21.365 "write_zeroes": true, 00:09:21.365 "zcopy": true, 00:09:21.365 "get_zone_info": false, 00:09:21.365 "zone_management": false, 00:09:21.365 "zone_append": false, 00:09:21.365 "compare": false, 00:09:21.365 "compare_and_write": false, 00:09:21.365 "abort": true, 00:09:21.365 "seek_hole": false, 00:09:21.365 "seek_data": false, 00:09:21.365 "copy": true, 00:09:21.365 "nvme_iov_md": false 00:09:21.365 }, 00:09:21.365 "memory_domains": [ 00:09:21.365 { 00:09:21.365 "dma_device_id": "system", 00:09:21.365 "dma_device_type": 1 00:09:21.365 }, 00:09:21.365 { 00:09:21.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:21.365 "dma_device_type": 2 00:09:21.365 } 00:09:21.365 ], 00:09:21.365 "driver_specific": {} 00:09:21.365 } 00:09:21.365 ] 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.365 09:21:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.365 "name": "Existed_Raid", 00:09:21.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.365 "strip_size_kb": 64, 00:09:21.365 "state": "configuring", 00:09:21.365 "raid_level": "concat", 00:09:21.365 "superblock": false, 00:09:21.365 "num_base_bdevs": 3, 00:09:21.365 "num_base_bdevs_discovered": 1, 00:09:21.365 "num_base_bdevs_operational": 3, 00:09:21.365 "base_bdevs_list": [ 00:09:21.365 { 00:09:21.365 "name": "BaseBdev1", 00:09:21.365 "uuid": "c9a09e4c-2960-4978-8804-9f09de875dc6", 00:09:21.365 "is_configured": true, 00:09:21.365 "data_offset": 0, 00:09:21.365 "data_size": 65536 00:09:21.365 }, 00:09:21.365 { 00:09:21.365 "name": "BaseBdev2", 00:09:21.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.365 "is_configured": false, 00:09:21.365 "data_offset": 0, 00:09:21.365 "data_size": 0 00:09:21.365 }, 00:09:21.365 { 00:09:21.365 "name": "BaseBdev3", 00:09:21.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.365 "is_configured": false, 00:09:21.365 "data_offset": 0, 00:09:21.365 "data_size": 0 00:09:21.365 } 00:09:21.365 ] 00:09:21.365 }' 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.365 09:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.932 [2024-11-20 09:21:47.149339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.932 [2024-11-20 09:21:47.149587] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.932 [2024-11-20 09:21:47.161420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.932 [2024-11-20 09:21:47.163978] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.932 [2024-11-20 09:21:47.164043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.932 [2024-11-20 09:21:47.164057] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.932 [2024-11-20 09:21:47.164068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.932 09:21:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.932 "name": "Existed_Raid", 00:09:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.932 "strip_size_kb": 64, 00:09:21.932 "state": "configuring", 00:09:21.932 "raid_level": "concat", 00:09:21.932 "superblock": false, 00:09:21.932 "num_base_bdevs": 3, 00:09:21.932 "num_base_bdevs_discovered": 1, 00:09:21.932 "num_base_bdevs_operational": 3, 00:09:21.932 "base_bdevs_list": [ 00:09:21.932 { 00:09:21.932 "name": "BaseBdev1", 00:09:21.932 "uuid": "c9a09e4c-2960-4978-8804-9f09de875dc6", 00:09:21.932 "is_configured": true, 00:09:21.932 "data_offset": 
0, 00:09:21.932 "data_size": 65536 00:09:21.932 }, 00:09:21.932 { 00:09:21.932 "name": "BaseBdev2", 00:09:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.932 "is_configured": false, 00:09:21.932 "data_offset": 0, 00:09:21.932 "data_size": 0 00:09:21.932 }, 00:09:21.932 { 00:09:21.932 "name": "BaseBdev3", 00:09:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.932 "is_configured": false, 00:09:21.932 "data_offset": 0, 00:09:21.932 "data_size": 0 00:09:21.932 } 00:09:21.932 ] 00:09:21.932 }' 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.932 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 [2024-11-20 09:21:47.701005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.500 BaseBdev2 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 [ 00:09:22.500 { 00:09:22.500 "name": "BaseBdev2", 00:09:22.500 "aliases": [ 00:09:22.500 "0c581866-f119-41bd-8b29-652133e96cbb" 00:09:22.500 ], 00:09:22.500 "product_name": "Malloc disk", 00:09:22.500 "block_size": 512, 00:09:22.500 "num_blocks": 65536, 00:09:22.500 "uuid": "0c581866-f119-41bd-8b29-652133e96cbb", 00:09:22.500 "assigned_rate_limits": { 00:09:22.500 "rw_ios_per_sec": 0, 00:09:22.500 "rw_mbytes_per_sec": 0, 00:09:22.500 "r_mbytes_per_sec": 0, 00:09:22.500 "w_mbytes_per_sec": 0 00:09:22.500 }, 00:09:22.500 "claimed": true, 00:09:22.500 "claim_type": "exclusive_write", 00:09:22.500 "zoned": false, 00:09:22.500 "supported_io_types": { 00:09:22.500 "read": true, 00:09:22.500 "write": true, 00:09:22.500 "unmap": true, 00:09:22.500 "flush": true, 00:09:22.500 "reset": true, 00:09:22.500 "nvme_admin": false, 00:09:22.500 "nvme_io": false, 00:09:22.500 "nvme_io_md": false, 00:09:22.500 "write_zeroes": true, 00:09:22.500 "zcopy": true, 00:09:22.500 "get_zone_info": false, 00:09:22.500 "zone_management": false, 00:09:22.500 "zone_append": false, 00:09:22.500 "compare": false, 00:09:22.500 "compare_and_write": false, 00:09:22.500 "abort": true, 00:09:22.500 "seek_hole": 
false, 00:09:22.500 "seek_data": false, 00:09:22.500 "copy": true, 00:09:22.500 "nvme_iov_md": false 00:09:22.500 }, 00:09:22.500 "memory_domains": [ 00:09:22.500 { 00:09:22.500 "dma_device_id": "system", 00:09:22.500 "dma_device_type": 1 00:09:22.500 }, 00:09:22.500 { 00:09:22.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.500 "dma_device_type": 2 00:09:22.500 } 00:09:22.500 ], 00:09:22.500 "driver_specific": {} 00:09:22.500 } 00:09:22.500 ] 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.500 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.501 "name": "Existed_Raid", 00:09:22.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.501 "strip_size_kb": 64, 00:09:22.501 "state": "configuring", 00:09:22.501 "raid_level": "concat", 00:09:22.501 "superblock": false, 00:09:22.501 "num_base_bdevs": 3, 00:09:22.501 "num_base_bdevs_discovered": 2, 00:09:22.501 "num_base_bdevs_operational": 3, 00:09:22.501 "base_bdevs_list": [ 00:09:22.501 { 00:09:22.501 "name": "BaseBdev1", 00:09:22.501 "uuid": "c9a09e4c-2960-4978-8804-9f09de875dc6", 00:09:22.501 "is_configured": true, 00:09:22.501 "data_offset": 0, 00:09:22.501 "data_size": 65536 00:09:22.501 }, 00:09:22.501 { 00:09:22.501 "name": "BaseBdev2", 00:09:22.501 "uuid": "0c581866-f119-41bd-8b29-652133e96cbb", 00:09:22.501 "is_configured": true, 00:09:22.501 "data_offset": 0, 00:09:22.501 "data_size": 65536 00:09:22.501 }, 00:09:22.501 { 00:09:22.501 "name": "BaseBdev3", 00:09:22.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.501 "is_configured": false, 00:09:22.501 "data_offset": 0, 00:09:22.501 "data_size": 0 00:09:22.501 } 00:09:22.501 ] 00:09:22.501 }' 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.501 09:21:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.068 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.068 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.069 [2024-11-20 09:21:48.294241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.069 [2024-11-20 09:21:48.294316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.069 [2024-11-20 09:21:48.294333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:23.069 [2024-11-20 09:21:48.294749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:23.069 [2024-11-20 09:21:48.294973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.069 [2024-11-20 09:21:48.294994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.069 [2024-11-20 09:21:48.295361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.069 BaseBdev3 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.069 09:21:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.069 [ 00:09:23.069 { 00:09:23.069 "name": "BaseBdev3", 00:09:23.069 "aliases": [ 00:09:23.069 "e77d0eb6-b272-4bc2-942b-286dbc54cf0d" 00:09:23.069 ], 00:09:23.069 "product_name": "Malloc disk", 00:09:23.069 "block_size": 512, 00:09:23.069 "num_blocks": 65536, 00:09:23.069 "uuid": "e77d0eb6-b272-4bc2-942b-286dbc54cf0d", 00:09:23.069 "assigned_rate_limits": { 00:09:23.069 "rw_ios_per_sec": 0, 00:09:23.069 "rw_mbytes_per_sec": 0, 00:09:23.069 "r_mbytes_per_sec": 0, 00:09:23.069 "w_mbytes_per_sec": 0 00:09:23.069 }, 00:09:23.069 "claimed": true, 00:09:23.069 "claim_type": "exclusive_write", 00:09:23.069 "zoned": false, 00:09:23.069 "supported_io_types": { 00:09:23.069 "read": true, 00:09:23.069 "write": true, 00:09:23.069 "unmap": true, 00:09:23.069 "flush": true, 00:09:23.069 "reset": true, 00:09:23.069 "nvme_admin": false, 00:09:23.069 "nvme_io": false, 00:09:23.069 "nvme_io_md": false, 00:09:23.069 "write_zeroes": true, 00:09:23.069 "zcopy": true, 00:09:23.069 "get_zone_info": false, 00:09:23.069 "zone_management": false, 00:09:23.069 "zone_append": false, 00:09:23.069 "compare": false, 
00:09:23.069 "compare_and_write": false, 00:09:23.069 "abort": true, 00:09:23.069 "seek_hole": false, 00:09:23.069 "seek_data": false, 00:09:23.069 "copy": true, 00:09:23.069 "nvme_iov_md": false 00:09:23.069 }, 00:09:23.069 "memory_domains": [ 00:09:23.069 { 00:09:23.069 "dma_device_id": "system", 00:09:23.069 "dma_device_type": 1 00:09:23.069 }, 00:09:23.069 { 00:09:23.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.069 "dma_device_type": 2 00:09:23.069 } 00:09:23.069 ], 00:09:23.069 "driver_specific": {} 00:09:23.069 } 00:09:23.069 ] 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.069 "name": "Existed_Raid", 00:09:23.069 "uuid": "61485e2d-6162-4ad5-b494-0f79437ac33c", 00:09:23.069 "strip_size_kb": 64, 00:09:23.069 "state": "online", 00:09:23.069 "raid_level": "concat", 00:09:23.069 "superblock": false, 00:09:23.069 "num_base_bdevs": 3, 00:09:23.069 "num_base_bdevs_discovered": 3, 00:09:23.069 "num_base_bdevs_operational": 3, 00:09:23.069 "base_bdevs_list": [ 00:09:23.069 { 00:09:23.069 "name": "BaseBdev1", 00:09:23.069 "uuid": "c9a09e4c-2960-4978-8804-9f09de875dc6", 00:09:23.069 "is_configured": true, 00:09:23.069 "data_offset": 0, 00:09:23.069 "data_size": 65536 00:09:23.069 }, 00:09:23.069 { 00:09:23.069 "name": "BaseBdev2", 00:09:23.069 "uuid": "0c581866-f119-41bd-8b29-652133e96cbb", 00:09:23.069 "is_configured": true, 00:09:23.069 "data_offset": 0, 00:09:23.069 "data_size": 65536 00:09:23.069 }, 00:09:23.069 { 00:09:23.069 "name": "BaseBdev3", 00:09:23.069 "uuid": "e77d0eb6-b272-4bc2-942b-286dbc54cf0d", 00:09:23.069 "is_configured": true, 00:09:23.069 "data_offset": 0, 00:09:23.069 "data_size": 65536 00:09:23.069 } 00:09:23.069 ] 00:09:23.069 }' 00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:23.069 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.328 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.587 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.587 [2024-11-20 09:21:48.789945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.587 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.587 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.587 "name": "Existed_Raid", 00:09:23.587 "aliases": [ 00:09:23.587 "61485e2d-6162-4ad5-b494-0f79437ac33c" 00:09:23.587 ], 00:09:23.587 "product_name": "Raid Volume", 00:09:23.587 "block_size": 512, 00:09:23.587 "num_blocks": 196608, 00:09:23.587 "uuid": "61485e2d-6162-4ad5-b494-0f79437ac33c", 00:09:23.587 "assigned_rate_limits": { 00:09:23.587 "rw_ios_per_sec": 0, 00:09:23.587 "rw_mbytes_per_sec": 0, 00:09:23.587 "r_mbytes_per_sec": 
0, 00:09:23.587 "w_mbytes_per_sec": 0 00:09:23.587 }, 00:09:23.587 "claimed": false, 00:09:23.587 "zoned": false, 00:09:23.587 "supported_io_types": { 00:09:23.587 "read": true, 00:09:23.587 "write": true, 00:09:23.587 "unmap": true, 00:09:23.587 "flush": true, 00:09:23.587 "reset": true, 00:09:23.587 "nvme_admin": false, 00:09:23.587 "nvme_io": false, 00:09:23.587 "nvme_io_md": false, 00:09:23.587 "write_zeroes": true, 00:09:23.587 "zcopy": false, 00:09:23.587 "get_zone_info": false, 00:09:23.587 "zone_management": false, 00:09:23.587 "zone_append": false, 00:09:23.587 "compare": false, 00:09:23.587 "compare_and_write": false, 00:09:23.587 "abort": false, 00:09:23.587 "seek_hole": false, 00:09:23.587 "seek_data": false, 00:09:23.587 "copy": false, 00:09:23.587 "nvme_iov_md": false 00:09:23.587 }, 00:09:23.587 "memory_domains": [ 00:09:23.587 { 00:09:23.587 "dma_device_id": "system", 00:09:23.587 "dma_device_type": 1 00:09:23.587 }, 00:09:23.587 { 00:09:23.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.587 "dma_device_type": 2 00:09:23.587 }, 00:09:23.587 { 00:09:23.587 "dma_device_id": "system", 00:09:23.587 "dma_device_type": 1 00:09:23.587 }, 00:09:23.587 { 00:09:23.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.587 "dma_device_type": 2 00:09:23.587 }, 00:09:23.587 { 00:09:23.587 "dma_device_id": "system", 00:09:23.587 "dma_device_type": 1 00:09:23.587 }, 00:09:23.587 { 00:09:23.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.587 "dma_device_type": 2 00:09:23.587 } 00:09:23.587 ], 00:09:23.588 "driver_specific": { 00:09:23.588 "raid": { 00:09:23.588 "uuid": "61485e2d-6162-4ad5-b494-0f79437ac33c", 00:09:23.588 "strip_size_kb": 64, 00:09:23.588 "state": "online", 00:09:23.588 "raid_level": "concat", 00:09:23.588 "superblock": false, 00:09:23.588 "num_base_bdevs": 3, 00:09:23.588 "num_base_bdevs_discovered": 3, 00:09:23.588 "num_base_bdevs_operational": 3, 00:09:23.588 "base_bdevs_list": [ 00:09:23.588 { 00:09:23.588 "name": "BaseBdev1", 
00:09:23.588 "uuid": "c9a09e4c-2960-4978-8804-9f09de875dc6", 00:09:23.588 "is_configured": true, 00:09:23.588 "data_offset": 0, 00:09:23.588 "data_size": 65536 00:09:23.588 }, 00:09:23.588 { 00:09:23.588 "name": "BaseBdev2", 00:09:23.588 "uuid": "0c581866-f119-41bd-8b29-652133e96cbb", 00:09:23.588 "is_configured": true, 00:09:23.588 "data_offset": 0, 00:09:23.588 "data_size": 65536 00:09:23.588 }, 00:09:23.588 { 00:09:23.588 "name": "BaseBdev3", 00:09:23.588 "uuid": "e77d0eb6-b272-4bc2-942b-286dbc54cf0d", 00:09:23.588 "is_configured": true, 00:09:23.588 "data_offset": 0, 00:09:23.588 "data_size": 65536 00:09:23.588 } 00:09:23.588 ] 00:09:23.588 } 00:09:23.588 } 00:09:23.588 }' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:23.588 BaseBdev2 00:09:23.588 BaseBdev3' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.588 09:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.588 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.858 [2024-11-20 09:21:49.069155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.858 [2024-11-20 09:21:49.069312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.858 [2024-11-20 09:21:49.069403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.858 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.859 "name": "Existed_Raid", 00:09:23.859 "uuid": "61485e2d-6162-4ad5-b494-0f79437ac33c", 00:09:23.859 "strip_size_kb": 64, 00:09:23.859 "state": "offline", 00:09:23.859 "raid_level": "concat", 00:09:23.859 "superblock": false, 00:09:23.859 "num_base_bdevs": 3, 00:09:23.859 "num_base_bdevs_discovered": 2, 00:09:23.859 "num_base_bdevs_operational": 2, 00:09:23.859 "base_bdevs_list": [ 00:09:23.859 { 00:09:23.859 "name": null, 00:09:23.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.859 "is_configured": false, 00:09:23.859 "data_offset": 0, 00:09:23.859 "data_size": 65536 00:09:23.859 }, 00:09:23.859 { 00:09:23.859 "name": "BaseBdev2", 00:09:23.859 "uuid": 
"0c581866-f119-41bd-8b29-652133e96cbb", 00:09:23.859 "is_configured": true, 00:09:23.859 "data_offset": 0, 00:09:23.859 "data_size": 65536 00:09:23.859 }, 00:09:23.859 { 00:09:23.859 "name": "BaseBdev3", 00:09:23.859 "uuid": "e77d0eb6-b272-4bc2-942b-286dbc54cf0d", 00:09:23.859 "is_configured": true, 00:09:23.859 "data_offset": 0, 00:09:23.859 "data_size": 65536 00:09:23.859 } 00:09:23.859 ] 00:09:23.859 }' 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.859 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.426 [2024-11-20 09:21:49.741752] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.426 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.685 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.685 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.685 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.685 09:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:24.685 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.685 09:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.685 [2024-11-20 09:21:49.926662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.685 [2024-11-20 09:21:49.926856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.685 09:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.685 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.943 BaseBdev2 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.943 
09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.943 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.943 [ 00:09:24.944 { 00:09:24.944 "name": "BaseBdev2", 00:09:24.944 "aliases": [ 00:09:24.944 "dac4b6a9-7e0d-43d8-827c-889b39434db6" 00:09:24.944 ], 00:09:24.944 "product_name": "Malloc disk", 00:09:24.944 "block_size": 512, 00:09:24.944 "num_blocks": 65536, 00:09:24.944 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:24.944 "assigned_rate_limits": { 00:09:24.944 "rw_ios_per_sec": 0, 00:09:24.944 "rw_mbytes_per_sec": 0, 00:09:24.944 "r_mbytes_per_sec": 0, 00:09:24.944 "w_mbytes_per_sec": 0 00:09:24.944 }, 00:09:24.944 "claimed": false, 00:09:24.944 "zoned": false, 00:09:24.944 "supported_io_types": { 00:09:24.944 "read": true, 00:09:24.944 "write": true, 00:09:24.944 "unmap": true, 00:09:24.944 "flush": true, 00:09:24.944 "reset": true, 00:09:24.944 "nvme_admin": false, 00:09:24.944 "nvme_io": false, 00:09:24.944 "nvme_io_md": false, 00:09:24.944 "write_zeroes": true, 
00:09:24.944 "zcopy": true, 00:09:24.944 "get_zone_info": false, 00:09:24.944 "zone_management": false, 00:09:24.944 "zone_append": false, 00:09:24.944 "compare": false, 00:09:24.944 "compare_and_write": false, 00:09:24.944 "abort": true, 00:09:24.944 "seek_hole": false, 00:09:24.944 "seek_data": false, 00:09:24.944 "copy": true, 00:09:24.944 "nvme_iov_md": false 00:09:24.944 }, 00:09:24.944 "memory_domains": [ 00:09:24.944 { 00:09:24.944 "dma_device_id": "system", 00:09:24.944 "dma_device_type": 1 00:09:24.944 }, 00:09:24.944 { 00:09:24.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.944 "dma_device_type": 2 00:09:24.944 } 00:09:24.944 ], 00:09:24.944 "driver_specific": {} 00:09:24.944 } 00:09:24.944 ] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 BaseBdev3 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.944 09:21:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 [ 00:09:24.944 { 00:09:24.944 "name": "BaseBdev3", 00:09:24.944 "aliases": [ 00:09:24.944 "e9778880-ff9d-4cb2-86ed-3d6005f2e11e" 00:09:24.944 ], 00:09:24.944 "product_name": "Malloc disk", 00:09:24.944 "block_size": 512, 00:09:24.944 "num_blocks": 65536, 00:09:24.944 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:24.944 "assigned_rate_limits": { 00:09:24.944 "rw_ios_per_sec": 0, 00:09:24.944 "rw_mbytes_per_sec": 0, 00:09:24.944 "r_mbytes_per_sec": 0, 00:09:24.944 "w_mbytes_per_sec": 0 00:09:24.944 }, 00:09:24.944 "claimed": false, 00:09:24.944 "zoned": false, 00:09:24.944 "supported_io_types": { 00:09:24.944 "read": true, 00:09:24.944 "write": true, 00:09:24.944 "unmap": true, 00:09:24.944 "flush": true, 00:09:24.944 "reset": true, 00:09:24.944 "nvme_admin": false, 00:09:24.944 "nvme_io": false, 00:09:24.944 "nvme_io_md": false, 00:09:24.944 "write_zeroes": true, 
00:09:24.944 "zcopy": true, 00:09:24.944 "get_zone_info": false, 00:09:24.944 "zone_management": false, 00:09:24.944 "zone_append": false, 00:09:24.944 "compare": false, 00:09:24.944 "compare_and_write": false, 00:09:24.944 "abort": true, 00:09:24.944 "seek_hole": false, 00:09:24.944 "seek_data": false, 00:09:24.944 "copy": true, 00:09:24.944 "nvme_iov_md": false 00:09:24.944 }, 00:09:24.944 "memory_domains": [ 00:09:24.944 { 00:09:24.944 "dma_device_id": "system", 00:09:24.944 "dma_device_type": 1 00:09:24.944 }, 00:09:24.944 { 00:09:24.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.944 "dma_device_type": 2 00:09:24.944 } 00:09:24.944 ], 00:09:24.944 "driver_specific": {} 00:09:24.944 } 00:09:24.944 ] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 [2024-11-20 09:21:50.306984] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.944 [2024-11-20 09:21:50.307185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.944 [2024-11-20 09:21:50.307260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.944 [2024-11-20 09:21:50.309956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.944 "name": "Existed_Raid", 00:09:24.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.944 "strip_size_kb": 64, 00:09:24.944 "state": "configuring", 00:09:24.944 "raid_level": "concat", 00:09:24.944 "superblock": false, 00:09:24.944 "num_base_bdevs": 3, 00:09:24.944 "num_base_bdevs_discovered": 2, 00:09:24.944 "num_base_bdevs_operational": 3, 00:09:24.944 "base_bdevs_list": [ 00:09:24.944 { 00:09:24.944 "name": "BaseBdev1", 00:09:24.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.944 "is_configured": false, 00:09:24.944 "data_offset": 0, 00:09:24.944 "data_size": 0 00:09:24.944 }, 00:09:24.944 { 00:09:24.944 "name": "BaseBdev2", 00:09:24.944 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:24.944 "is_configured": true, 00:09:24.944 "data_offset": 0, 00:09:24.944 "data_size": 65536 00:09:24.944 }, 00:09:24.944 { 00:09:24.944 "name": "BaseBdev3", 00:09:24.944 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:24.944 "is_configured": true, 00:09:24.944 "data_offset": 0, 00:09:24.944 "data_size": 65536 00:09:24.944 } 00:09:24.944 ] 00:09:24.944 }' 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.944 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.511 [2024-11-20 09:21:50.794118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.511 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.512 "name": "Existed_Raid", 00:09:25.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.512 "strip_size_kb": 64, 00:09:25.512 "state": "configuring", 00:09:25.512 "raid_level": "concat", 00:09:25.512 "superblock": false, 
00:09:25.512 "num_base_bdevs": 3, 00:09:25.512 "num_base_bdevs_discovered": 1, 00:09:25.512 "num_base_bdevs_operational": 3, 00:09:25.512 "base_bdevs_list": [ 00:09:25.512 { 00:09:25.512 "name": "BaseBdev1", 00:09:25.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.512 "is_configured": false, 00:09:25.512 "data_offset": 0, 00:09:25.512 "data_size": 0 00:09:25.512 }, 00:09:25.512 { 00:09:25.512 "name": null, 00:09:25.512 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:25.512 "is_configured": false, 00:09:25.512 "data_offset": 0, 00:09:25.512 "data_size": 65536 00:09:25.512 }, 00:09:25.512 { 00:09:25.512 "name": "BaseBdev3", 00:09:25.512 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:25.512 "is_configured": true, 00:09:25.512 "data_offset": 0, 00:09:25.512 "data_size": 65536 00:09:25.512 } 00:09:25.512 ] 00:09:25.512 }' 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.512 09:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.079 
09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 [2024-11-20 09:21:51.393152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.079 BaseBdev1 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 [ 00:09:26.079 { 00:09:26.079 "name": "BaseBdev1", 00:09:26.079 "aliases": [ 00:09:26.079 "9070b719-f0ec-4cf7-895a-08cdcf50cf10" 00:09:26.079 ], 00:09:26.079 "product_name": 
"Malloc disk", 00:09:26.079 "block_size": 512, 00:09:26.079 "num_blocks": 65536, 00:09:26.079 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:26.079 "assigned_rate_limits": { 00:09:26.079 "rw_ios_per_sec": 0, 00:09:26.079 "rw_mbytes_per_sec": 0, 00:09:26.079 "r_mbytes_per_sec": 0, 00:09:26.079 "w_mbytes_per_sec": 0 00:09:26.079 }, 00:09:26.079 "claimed": true, 00:09:26.079 "claim_type": "exclusive_write", 00:09:26.079 "zoned": false, 00:09:26.079 "supported_io_types": { 00:09:26.079 "read": true, 00:09:26.079 "write": true, 00:09:26.079 "unmap": true, 00:09:26.079 "flush": true, 00:09:26.079 "reset": true, 00:09:26.079 "nvme_admin": false, 00:09:26.079 "nvme_io": false, 00:09:26.079 "nvme_io_md": false, 00:09:26.079 "write_zeroes": true, 00:09:26.079 "zcopy": true, 00:09:26.079 "get_zone_info": false, 00:09:26.079 "zone_management": false, 00:09:26.079 "zone_append": false, 00:09:26.079 "compare": false, 00:09:26.079 "compare_and_write": false, 00:09:26.079 "abort": true, 00:09:26.079 "seek_hole": false, 00:09:26.079 "seek_data": false, 00:09:26.079 "copy": true, 00:09:26.079 "nvme_iov_md": false 00:09:26.079 }, 00:09:26.079 "memory_domains": [ 00:09:26.079 { 00:09:26.079 "dma_device_id": "system", 00:09:26.079 "dma_device_type": 1 00:09:26.079 }, 00:09:26.079 { 00:09:26.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.079 "dma_device_type": 2 00:09:26.079 } 00:09:26.079 ], 00:09:26.079 "driver_specific": {} 00:09:26.079 } 00:09:26.079 ] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.079 09:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.079 "name": "Existed_Raid", 00:09:26.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.079 "strip_size_kb": 64, 00:09:26.079 "state": "configuring", 00:09:26.079 "raid_level": "concat", 00:09:26.079 "superblock": false, 00:09:26.079 "num_base_bdevs": 3, 00:09:26.079 "num_base_bdevs_discovered": 2, 00:09:26.079 "num_base_bdevs_operational": 3, 00:09:26.079 "base_bdevs_list": [ 00:09:26.079 { 00:09:26.079 "name": "BaseBdev1", 
00:09:26.079 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:26.079 "is_configured": true, 00:09:26.079 "data_offset": 0, 00:09:26.079 "data_size": 65536 00:09:26.079 }, 00:09:26.079 { 00:09:26.079 "name": null, 00:09:26.079 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:26.079 "is_configured": false, 00:09:26.079 "data_offset": 0, 00:09:26.079 "data_size": 65536 00:09:26.079 }, 00:09:26.079 { 00:09:26.079 "name": "BaseBdev3", 00:09:26.079 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:26.079 "is_configured": true, 00:09:26.079 "data_offset": 0, 00:09:26.079 "data_size": 65536 00:09:26.079 } 00:09:26.079 ] 00:09:26.079 }' 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.079 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.647 [2024-11-20 09:21:51.948405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.647 
09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.647 09:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.647 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.647 "name": "Existed_Raid", 00:09:26.647 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:26.647 "strip_size_kb": 64, 00:09:26.647 "state": "configuring", 00:09:26.647 "raid_level": "concat", 00:09:26.647 "superblock": false, 00:09:26.647 "num_base_bdevs": 3, 00:09:26.647 "num_base_bdevs_discovered": 1, 00:09:26.647 "num_base_bdevs_operational": 3, 00:09:26.647 "base_bdevs_list": [ 00:09:26.647 { 00:09:26.647 "name": "BaseBdev1", 00:09:26.647 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:26.647 "is_configured": true, 00:09:26.647 "data_offset": 0, 00:09:26.647 "data_size": 65536 00:09:26.647 }, 00:09:26.647 { 00:09:26.647 "name": null, 00:09:26.647 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:26.647 "is_configured": false, 00:09:26.647 "data_offset": 0, 00:09:26.647 "data_size": 65536 00:09:26.647 }, 00:09:26.647 { 00:09:26.647 "name": null, 00:09:26.647 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:26.647 "is_configured": false, 00:09:26.647 "data_offset": 0, 00:09:26.647 "data_size": 65536 00:09:26.647 } 00:09:26.647 ] 00:09:26.647 }' 00:09:26.647 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.648 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 [2024-11-20 09:21:52.479759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.224 "name": "Existed_Raid", 00:09:27.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.224 "strip_size_kb": 64, 00:09:27.224 "state": "configuring", 00:09:27.224 "raid_level": "concat", 00:09:27.224 "superblock": false, 00:09:27.224 "num_base_bdevs": 3, 00:09:27.224 "num_base_bdevs_discovered": 2, 00:09:27.224 "num_base_bdevs_operational": 3, 00:09:27.224 "base_bdevs_list": [ 00:09:27.224 { 00:09:27.224 "name": "BaseBdev1", 00:09:27.224 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:27.224 "is_configured": true, 00:09:27.224 "data_offset": 0, 00:09:27.224 "data_size": 65536 00:09:27.224 }, 00:09:27.224 { 00:09:27.224 "name": null, 00:09:27.224 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:27.224 "is_configured": false, 00:09:27.224 "data_offset": 0, 00:09:27.224 "data_size": 65536 00:09:27.224 }, 00:09:27.224 { 00:09:27.224 "name": "BaseBdev3", 00:09:27.224 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:27.224 "is_configured": true, 00:09:27.224 "data_offset": 0, 00:09:27.224 "data_size": 65536 00:09:27.224 } 00:09:27.224 ] 00:09:27.224 }' 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.224 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.805 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.805 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.805 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.805 09:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.805 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.805 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:27.805 09:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.806 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.806 09:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.806 [2024-11-20 09:21:53.002931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.806 "name": "Existed_Raid", 00:09:27.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.806 "strip_size_kb": 64, 00:09:27.806 "state": "configuring", 00:09:27.806 "raid_level": "concat", 00:09:27.806 "superblock": false, 00:09:27.806 "num_base_bdevs": 3, 00:09:27.806 "num_base_bdevs_discovered": 1, 00:09:27.806 "num_base_bdevs_operational": 3, 00:09:27.806 "base_bdevs_list": [ 00:09:27.806 { 00:09:27.806 "name": null, 00:09:27.806 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:27.806 "is_configured": false, 00:09:27.806 "data_offset": 0, 00:09:27.806 "data_size": 65536 00:09:27.806 }, 00:09:27.806 { 00:09:27.806 "name": null, 00:09:27.806 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:27.806 "is_configured": false, 00:09:27.806 "data_offset": 0, 00:09:27.806 "data_size": 65536 00:09:27.806 }, 00:09:27.806 { 00:09:27.806 "name": "BaseBdev3", 00:09:27.806 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:27.806 "is_configured": true, 00:09:27.806 "data_offset": 0, 00:09:27.806 "data_size": 65536 00:09:27.806 } 00:09:27.806 ] 00:09:27.806 }' 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.806 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.374 [2024-11-20 09:21:53.613371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.374 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.375 "name": "Existed_Raid", 00:09:28.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.375 "strip_size_kb": 64, 00:09:28.375 "state": "configuring", 00:09:28.375 "raid_level": "concat", 00:09:28.375 "superblock": false, 00:09:28.375 "num_base_bdevs": 3, 00:09:28.375 "num_base_bdevs_discovered": 2, 00:09:28.375 "num_base_bdevs_operational": 3, 00:09:28.375 "base_bdevs_list": [ 00:09:28.375 { 00:09:28.375 "name": null, 00:09:28.375 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:28.375 "is_configured": false, 00:09:28.375 "data_offset": 0, 00:09:28.375 "data_size": 65536 00:09:28.375 }, 00:09:28.375 { 00:09:28.375 "name": "BaseBdev2", 00:09:28.375 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:28.375 "is_configured": true, 00:09:28.375 "data_offset": 0, 00:09:28.375 "data_size": 65536 00:09:28.375 }, 00:09:28.375 { 
00:09:28.375 "name": "BaseBdev3", 00:09:28.375 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:28.375 "is_configured": true, 00:09:28.375 "data_offset": 0, 00:09:28.375 "data_size": 65536 00:09:28.375 } 00:09:28.375 ] 00:09:28.375 }' 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.375 09:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.634 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.634 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.634 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.634 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.634 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9070b719-f0ec-4cf7-895a-08cdcf50cf10 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.894 09:21:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 [2024-11-20 09:21:54.209920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:28.894 [2024-11-20 09:21:54.210001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.894 [2024-11-20 09:21:54.210014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:28.894 [2024-11-20 09:21:54.210366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:28.894 [2024-11-20 09:21:54.210595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.894 [2024-11-20 09:21:54.210608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:28.894 [2024-11-20 09:21:54.210974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.894 NewBaseBdev 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.894 [ 00:09:28.894 { 00:09:28.894 "name": "NewBaseBdev", 00:09:28.894 "aliases": [ 00:09:28.894 "9070b719-f0ec-4cf7-895a-08cdcf50cf10" 00:09:28.894 ], 00:09:28.894 "product_name": "Malloc disk", 00:09:28.894 "block_size": 512, 00:09:28.894 "num_blocks": 65536, 00:09:28.894 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:28.894 "assigned_rate_limits": { 00:09:28.894 "rw_ios_per_sec": 0, 00:09:28.894 "rw_mbytes_per_sec": 0, 00:09:28.894 "r_mbytes_per_sec": 0, 00:09:28.894 "w_mbytes_per_sec": 0 00:09:28.894 }, 00:09:28.894 "claimed": true, 00:09:28.894 "claim_type": "exclusive_write", 00:09:28.894 "zoned": false, 00:09:28.894 "supported_io_types": { 00:09:28.894 "read": true, 00:09:28.894 "write": true, 00:09:28.894 "unmap": true, 00:09:28.894 "flush": true, 00:09:28.894 "reset": true, 00:09:28.894 "nvme_admin": false, 00:09:28.894 "nvme_io": false, 00:09:28.894 "nvme_io_md": false, 00:09:28.894 "write_zeroes": true, 00:09:28.894 "zcopy": true, 00:09:28.894 "get_zone_info": false, 00:09:28.894 "zone_management": false, 00:09:28.894 "zone_append": false, 00:09:28.894 "compare": false, 00:09:28.894 "compare_and_write": false, 00:09:28.894 "abort": true, 00:09:28.894 "seek_hole": false, 00:09:28.894 "seek_data": false, 00:09:28.894 "copy": true, 00:09:28.894 "nvme_iov_md": false 00:09:28.894 }, 00:09:28.894 "memory_domains": [ 00:09:28.894 { 00:09:28.894 
"dma_device_id": "system", 00:09:28.894 "dma_device_type": 1 00:09:28.894 }, 00:09:28.894 { 00:09:28.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.894 "dma_device_type": 2 00:09:28.894 } 00:09:28.894 ], 00:09:28.894 "driver_specific": {} 00:09:28.894 } 00:09:28.894 ] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.894 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.895 "name": "Existed_Raid", 00:09:28.895 "uuid": "b81f2458-ab05-4a87-92cb-2e874ed98a23", 00:09:28.895 "strip_size_kb": 64, 00:09:28.895 "state": "online", 00:09:28.895 "raid_level": "concat", 00:09:28.895 "superblock": false, 00:09:28.895 "num_base_bdevs": 3, 00:09:28.895 "num_base_bdevs_discovered": 3, 00:09:28.895 "num_base_bdevs_operational": 3, 00:09:28.895 "base_bdevs_list": [ 00:09:28.895 { 00:09:28.895 "name": "NewBaseBdev", 00:09:28.895 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:28.895 "is_configured": true, 00:09:28.895 "data_offset": 0, 00:09:28.895 "data_size": 65536 00:09:28.895 }, 00:09:28.895 { 00:09:28.895 "name": "BaseBdev2", 00:09:28.895 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:28.895 "is_configured": true, 00:09:28.895 "data_offset": 0, 00:09:28.895 "data_size": 65536 00:09:28.895 }, 00:09:28.895 { 00:09:28.895 "name": "BaseBdev3", 00:09:28.895 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:28.895 "is_configured": true, 00:09:28.895 "data_offset": 0, 00:09:28.895 "data_size": 65536 00:09:28.895 } 00:09:28.895 ] 00:09:28.895 }' 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.895 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.464 [2024-11-20 09:21:54.729588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.464 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.464 "name": "Existed_Raid", 00:09:29.464 "aliases": [ 00:09:29.464 "b81f2458-ab05-4a87-92cb-2e874ed98a23" 00:09:29.464 ], 00:09:29.464 "product_name": "Raid Volume", 00:09:29.464 "block_size": 512, 00:09:29.464 "num_blocks": 196608, 00:09:29.464 "uuid": "b81f2458-ab05-4a87-92cb-2e874ed98a23", 00:09:29.465 "assigned_rate_limits": { 00:09:29.465 "rw_ios_per_sec": 0, 00:09:29.465 "rw_mbytes_per_sec": 0, 00:09:29.465 "r_mbytes_per_sec": 0, 00:09:29.465 "w_mbytes_per_sec": 0 00:09:29.465 }, 00:09:29.465 "claimed": false, 00:09:29.465 "zoned": false, 00:09:29.465 "supported_io_types": { 00:09:29.465 "read": true, 00:09:29.465 "write": true, 00:09:29.465 "unmap": true, 00:09:29.465 "flush": true, 00:09:29.465 "reset": true, 00:09:29.465 "nvme_admin": false, 00:09:29.465 "nvme_io": false, 00:09:29.465 "nvme_io_md": false, 00:09:29.465 "write_zeroes": true, 00:09:29.465 "zcopy": false, 
00:09:29.465 "get_zone_info": false, 00:09:29.465 "zone_management": false, 00:09:29.465 "zone_append": false, 00:09:29.465 "compare": false, 00:09:29.465 "compare_and_write": false, 00:09:29.465 "abort": false, 00:09:29.465 "seek_hole": false, 00:09:29.465 "seek_data": false, 00:09:29.465 "copy": false, 00:09:29.465 "nvme_iov_md": false 00:09:29.465 }, 00:09:29.465 "memory_domains": [ 00:09:29.465 { 00:09:29.465 "dma_device_id": "system", 00:09:29.465 "dma_device_type": 1 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.465 "dma_device_type": 2 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "dma_device_id": "system", 00:09:29.465 "dma_device_type": 1 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.465 "dma_device_type": 2 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "dma_device_id": "system", 00:09:29.465 "dma_device_type": 1 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.465 "dma_device_type": 2 00:09:29.465 } 00:09:29.465 ], 00:09:29.465 "driver_specific": { 00:09:29.465 "raid": { 00:09:29.465 "uuid": "b81f2458-ab05-4a87-92cb-2e874ed98a23", 00:09:29.465 "strip_size_kb": 64, 00:09:29.465 "state": "online", 00:09:29.465 "raid_level": "concat", 00:09:29.465 "superblock": false, 00:09:29.465 "num_base_bdevs": 3, 00:09:29.465 "num_base_bdevs_discovered": 3, 00:09:29.465 "num_base_bdevs_operational": 3, 00:09:29.465 "base_bdevs_list": [ 00:09:29.465 { 00:09:29.465 "name": "NewBaseBdev", 00:09:29.465 "uuid": "9070b719-f0ec-4cf7-895a-08cdcf50cf10", 00:09:29.465 "is_configured": true, 00:09:29.465 "data_offset": 0, 00:09:29.465 "data_size": 65536 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "name": "BaseBdev2", 00:09:29.465 "uuid": "dac4b6a9-7e0d-43d8-827c-889b39434db6", 00:09:29.465 "is_configured": true, 00:09:29.465 "data_offset": 0, 00:09:29.465 "data_size": 65536 00:09:29.465 }, 00:09:29.465 { 00:09:29.465 "name": "BaseBdev3", 
00:09:29.465 "uuid": "e9778880-ff9d-4cb2-86ed-3d6005f2e11e", 00:09:29.465 "is_configured": true, 00:09:29.465 "data_offset": 0, 00:09:29.465 "data_size": 65536 00:09:29.465 } 00:09:29.465 ] 00:09:29.465 } 00:09:29.465 } 00:09:29.465 }' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.465 BaseBdev2 00:09:29.465 BaseBdev3' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.465 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.725 09:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:29.725 [2024-11-20 09:21:55.004774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.725 [2024-11-20 09:21:55.004839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.725 [2024-11-20 09:21:55.004978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.725 [2024-11-20 09:21:55.005056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.725 [2024-11-20 09:21:55.005073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65876 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65876 ']' 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65876 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65876 00:09:29.725 killing process with pid 65876 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65876' 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65876 00:09:29.725 
[2024-11-20 09:21:55.048395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.725 09:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65876 00:09:30.293 [2024-11-20 09:21:55.450514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.671 ************************************ 00:09:31.671 END TEST raid_state_function_test 00:09:31.671 ************************************ 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.671 00:09:31.671 real 0m11.922s 00:09:31.671 user 0m18.530s 00:09:31.671 sys 0m2.156s 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.671 09:21:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:31.671 09:21:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.671 09:21:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.671 09:21:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.671 ************************************ 00:09:31.671 START TEST raid_state_function_test_sb 00:09:31.671 ************************************ 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.671 09:21:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66515 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66515' 00:09:31.671 Process raid pid: 66515 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66515 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66515 ']' 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.671 09:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.671 [2024-11-20 09:21:57.088594] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:09:31.671 [2024-11-20 09:21:57.089337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.930 [2024-11-20 09:21:57.272240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.190 [2024-11-20 09:21:57.434774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.449 [2024-11-20 09:21:57.726523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.449 [2024-11-20 09:21:57.726715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.709 [2024-11-20 09:21:58.038731] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.709 [2024-11-20 09:21:58.038824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.709 [2024-11-20 09:21:58.038837] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.709 [2024-11-20 09:21:58.038850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.709 [2024-11-20 09:21:58.038857] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:32.709 [2024-11-20 09:21:58.038868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.709 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.709 "name": "Existed_Raid", 00:09:32.710 "uuid": "38bb5f3e-f828-4914-ae60-ec48690cfe6c", 00:09:32.710 "strip_size_kb": 64, 00:09:32.710 "state": "configuring", 00:09:32.710 "raid_level": "concat", 00:09:32.710 "superblock": true, 00:09:32.710 "num_base_bdevs": 3, 00:09:32.710 "num_base_bdevs_discovered": 0, 00:09:32.710 "num_base_bdevs_operational": 3, 00:09:32.710 "base_bdevs_list": [ 00:09:32.710 { 00:09:32.710 "name": "BaseBdev1", 00:09:32.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.710 "is_configured": false, 00:09:32.710 "data_offset": 0, 00:09:32.710 "data_size": 0 00:09:32.710 }, 00:09:32.710 { 00:09:32.710 "name": "BaseBdev2", 00:09:32.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.710 "is_configured": false, 00:09:32.710 "data_offset": 0, 00:09:32.710 "data_size": 0 00:09:32.710 }, 00:09:32.710 { 00:09:32.710 "name": "BaseBdev3", 00:09:32.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.710 "is_configured": false, 00:09:32.710 "data_offset": 0, 00:09:32.710 "data_size": 0 00:09:32.710 } 00:09:32.710 ] 00:09:32.710 }' 00:09:32.710 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.710 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.279 [2024-11-20 09:21:58.489883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.279 [2024-11-20 09:21:58.490071] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.279 [2024-11-20 09:21:58.501878] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.279 [2024-11-20 09:21:58.502064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.279 [2024-11-20 09:21:58.502097] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.279 [2024-11-20 09:21:58.502125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.279 [2024-11-20 09:21:58.502147] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.279 [2024-11-20 09:21:58.502172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.279 [2024-11-20 09:21:58.564403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.279 BaseBdev1 
00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.279 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.279 [ 00:09:33.279 { 00:09:33.279 "name": "BaseBdev1", 00:09:33.279 "aliases": [ 00:09:33.280 "a819d910-0b48-415b-9d77-1fb10dd2a90f" 00:09:33.280 ], 00:09:33.280 "product_name": "Malloc disk", 00:09:33.280 "block_size": 512, 00:09:33.280 "num_blocks": 65536, 00:09:33.280 "uuid": "a819d910-0b48-415b-9d77-1fb10dd2a90f", 00:09:33.280 "assigned_rate_limits": { 00:09:33.280 
"rw_ios_per_sec": 0, 00:09:33.280 "rw_mbytes_per_sec": 0, 00:09:33.280 "r_mbytes_per_sec": 0, 00:09:33.280 "w_mbytes_per_sec": 0 00:09:33.280 }, 00:09:33.280 "claimed": true, 00:09:33.280 "claim_type": "exclusive_write", 00:09:33.280 "zoned": false, 00:09:33.280 "supported_io_types": { 00:09:33.280 "read": true, 00:09:33.280 "write": true, 00:09:33.280 "unmap": true, 00:09:33.280 "flush": true, 00:09:33.280 "reset": true, 00:09:33.280 "nvme_admin": false, 00:09:33.280 "nvme_io": false, 00:09:33.280 "nvme_io_md": false, 00:09:33.280 "write_zeroes": true, 00:09:33.280 "zcopy": true, 00:09:33.280 "get_zone_info": false, 00:09:33.280 "zone_management": false, 00:09:33.280 "zone_append": false, 00:09:33.280 "compare": false, 00:09:33.280 "compare_and_write": false, 00:09:33.280 "abort": true, 00:09:33.280 "seek_hole": false, 00:09:33.280 "seek_data": false, 00:09:33.280 "copy": true, 00:09:33.280 "nvme_iov_md": false 00:09:33.280 }, 00:09:33.280 "memory_domains": [ 00:09:33.280 { 00:09:33.280 "dma_device_id": "system", 00:09:33.280 "dma_device_type": 1 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.280 "dma_device_type": 2 00:09:33.280 } 00:09:33.280 ], 00:09:33.280 "driver_specific": {} 00:09:33.280 } 00:09:33.280 ] 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.280 "name": "Existed_Raid", 00:09:33.280 "uuid": "1124b1c1-a9f6-4cc1-9f68-2535d305f1d2", 00:09:33.280 "strip_size_kb": 64, 00:09:33.280 "state": "configuring", 00:09:33.280 "raid_level": "concat", 00:09:33.280 "superblock": true, 00:09:33.280 "num_base_bdevs": 3, 00:09:33.280 "num_base_bdevs_discovered": 1, 00:09:33.280 "num_base_bdevs_operational": 3, 00:09:33.280 "base_bdevs_list": [ 00:09:33.280 { 00:09:33.280 "name": "BaseBdev1", 00:09:33.280 "uuid": "a819d910-0b48-415b-9d77-1fb10dd2a90f", 00:09:33.280 "is_configured": true, 00:09:33.280 "data_offset": 2048, 00:09:33.280 "data_size": 
63488 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "name": "BaseBdev2", 00:09:33.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.280 "is_configured": false, 00:09:33.280 "data_offset": 0, 00:09:33.280 "data_size": 0 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "name": "BaseBdev3", 00:09:33.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.280 "is_configured": false, 00:09:33.280 "data_offset": 0, 00:09:33.280 "data_size": 0 00:09:33.280 } 00:09:33.280 ] 00:09:33.280 }' 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.280 09:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.862 [2024-11-20 09:21:59.123741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.862 [2024-11-20 09:21:59.123835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.862 [2024-11-20 09:21:59.135800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.862 [2024-11-20 
09:21:59.138577] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.862 [2024-11-20 09:21:59.138639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.862 [2024-11-20 09:21:59.138653] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.862 [2024-11-20 09:21:59.138663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.862 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.863 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.863 "name": "Existed_Raid", 00:09:33.863 "uuid": "64c213cb-ca54-4801-a72a-5d27e3832fa0", 00:09:33.863 "strip_size_kb": 64, 00:09:33.863 "state": "configuring", 00:09:33.863 "raid_level": "concat", 00:09:33.863 "superblock": true, 00:09:33.863 "num_base_bdevs": 3, 00:09:33.863 "num_base_bdevs_discovered": 1, 00:09:33.863 "num_base_bdevs_operational": 3, 00:09:33.863 "base_bdevs_list": [ 00:09:33.863 { 00:09:33.863 "name": "BaseBdev1", 00:09:33.863 "uuid": "a819d910-0b48-415b-9d77-1fb10dd2a90f", 00:09:33.863 "is_configured": true, 00:09:33.863 "data_offset": 2048, 00:09:33.863 "data_size": 63488 00:09:33.863 }, 00:09:33.863 { 00:09:33.863 "name": "BaseBdev2", 00:09:33.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.863 "is_configured": false, 00:09:33.863 "data_offset": 0, 00:09:33.863 "data_size": 0 00:09:33.864 }, 00:09:33.864 { 00:09:33.864 "name": "BaseBdev3", 00:09:33.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.864 "is_configured": false, 00:09:33.864 "data_offset": 0, 00:09:33.864 "data_size": 0 00:09:33.864 } 00:09:33.864 ] 00:09:33.864 }' 00:09:33.864 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.864 09:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.436 [2024-11-20 09:21:59.678871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.436 BaseBdev2 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.436 [ 00:09:34.436 { 00:09:34.436 "name": "BaseBdev2", 00:09:34.436 "aliases": [ 00:09:34.436 "5d77ffd7-b73a-4c8b-a803-d0ace8de3dd8" 00:09:34.436 ], 00:09:34.436 "product_name": "Malloc disk", 00:09:34.436 "block_size": 512, 00:09:34.436 "num_blocks": 65536, 00:09:34.436 "uuid": "5d77ffd7-b73a-4c8b-a803-d0ace8de3dd8", 00:09:34.436 "assigned_rate_limits": { 00:09:34.436 "rw_ios_per_sec": 0, 00:09:34.436 "rw_mbytes_per_sec": 0, 00:09:34.436 "r_mbytes_per_sec": 0, 00:09:34.436 "w_mbytes_per_sec": 0 00:09:34.436 }, 00:09:34.436 "claimed": true, 00:09:34.436 "claim_type": "exclusive_write", 00:09:34.436 "zoned": false, 00:09:34.436 "supported_io_types": { 00:09:34.436 "read": true, 00:09:34.436 "write": true, 00:09:34.436 "unmap": true, 00:09:34.436 "flush": true, 00:09:34.436 "reset": true, 00:09:34.436 "nvme_admin": false, 00:09:34.436 "nvme_io": false, 00:09:34.436 "nvme_io_md": false, 00:09:34.436 "write_zeroes": true, 00:09:34.436 "zcopy": true, 00:09:34.436 "get_zone_info": false, 00:09:34.436 "zone_management": false, 00:09:34.436 "zone_append": false, 00:09:34.436 "compare": false, 00:09:34.436 "compare_and_write": false, 00:09:34.436 "abort": true, 00:09:34.436 "seek_hole": false, 00:09:34.436 "seek_data": false, 00:09:34.436 "copy": true, 00:09:34.436 "nvme_iov_md": false 00:09:34.436 }, 00:09:34.436 "memory_domains": [ 00:09:34.436 { 00:09:34.436 "dma_device_id": "system", 00:09:34.436 "dma_device_type": 1 00:09:34.436 }, 00:09:34.436 { 00:09:34.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.436 "dma_device_type": 2 00:09:34.436 } 00:09:34.436 ], 00:09:34.436 "driver_specific": {} 00:09:34.436 } 00:09:34.436 ] 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.436 "name": "Existed_Raid", 00:09:34.436 "uuid": "64c213cb-ca54-4801-a72a-5d27e3832fa0", 00:09:34.436 "strip_size_kb": 64, 00:09:34.436 "state": "configuring", 00:09:34.436 "raid_level": "concat", 00:09:34.436 "superblock": true, 00:09:34.436 "num_base_bdevs": 3, 00:09:34.436 "num_base_bdevs_discovered": 2, 00:09:34.436 "num_base_bdevs_operational": 3, 00:09:34.436 "base_bdevs_list": [ 00:09:34.436 { 00:09:34.436 "name": "BaseBdev1", 00:09:34.436 "uuid": "a819d910-0b48-415b-9d77-1fb10dd2a90f", 00:09:34.436 "is_configured": true, 00:09:34.436 "data_offset": 2048, 00:09:34.436 "data_size": 63488 00:09:34.436 }, 00:09:34.436 { 00:09:34.436 "name": "BaseBdev2", 00:09:34.436 "uuid": "5d77ffd7-b73a-4c8b-a803-d0ace8de3dd8", 00:09:34.436 "is_configured": true, 00:09:34.436 "data_offset": 2048, 00:09:34.436 "data_size": 63488 00:09:34.436 }, 00:09:34.436 { 00:09:34.436 "name": "BaseBdev3", 00:09:34.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.436 "is_configured": false, 00:09:34.436 "data_offset": 0, 00:09:34.436 "data_size": 0 00:09:34.436 } 00:09:34.436 ] 00:09:34.436 }' 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.436 09:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.007 [2024-11-20 09:22:00.258818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.007 [2024-11-20 09:22:00.259331] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.007 [2024-11-20 09:22:00.259369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.007 [2024-11-20 09:22:00.259764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.007 [2024-11-20 09:22:00.259972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.007 BaseBdev3 00:09:35.007 [2024-11-20 09:22:00.260045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.007 [2024-11-20 09:22:00.260241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.007 [ 00:09:35.007 { 00:09:35.007 "name": "BaseBdev3", 00:09:35.007 "aliases": [ 00:09:35.007 "ec0bd44a-2d26-4579-84a8-be32019d7738" 00:09:35.007 ], 00:09:35.007 "product_name": "Malloc disk", 00:09:35.007 "block_size": 512, 00:09:35.007 "num_blocks": 65536, 00:09:35.007 "uuid": "ec0bd44a-2d26-4579-84a8-be32019d7738", 00:09:35.007 "assigned_rate_limits": { 00:09:35.007 "rw_ios_per_sec": 0, 00:09:35.007 "rw_mbytes_per_sec": 0, 00:09:35.007 "r_mbytes_per_sec": 0, 00:09:35.007 "w_mbytes_per_sec": 0 00:09:35.007 }, 00:09:35.007 "claimed": true, 00:09:35.007 "claim_type": "exclusive_write", 00:09:35.007 "zoned": false, 00:09:35.007 "supported_io_types": { 00:09:35.007 "read": true, 00:09:35.007 "write": true, 00:09:35.007 "unmap": true, 00:09:35.007 "flush": true, 00:09:35.007 "reset": true, 00:09:35.007 "nvme_admin": false, 00:09:35.007 "nvme_io": false, 00:09:35.007 "nvme_io_md": false, 00:09:35.007 "write_zeroes": true, 00:09:35.007 "zcopy": true, 00:09:35.007 "get_zone_info": false, 00:09:35.007 "zone_management": false, 00:09:35.007 "zone_append": false, 00:09:35.007 "compare": false, 00:09:35.007 "compare_and_write": false, 00:09:35.007 "abort": true, 00:09:35.007 "seek_hole": false, 00:09:35.007 "seek_data": false, 00:09:35.007 "copy": true, 00:09:35.007 "nvme_iov_md": false 00:09:35.007 }, 00:09:35.007 "memory_domains": [ 00:09:35.007 { 00:09:35.007 "dma_device_id": "system", 00:09:35.007 "dma_device_type": 1 00:09:35.007 }, 00:09:35.007 { 00:09:35.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.007 "dma_device_type": 2 00:09:35.007 } 00:09:35.007 ], 00:09:35.007 "driver_specific": 
{} 00:09:35.007 } 00:09:35.007 ] 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.007 "name": "Existed_Raid", 00:09:35.007 "uuid": "64c213cb-ca54-4801-a72a-5d27e3832fa0", 00:09:35.007 "strip_size_kb": 64, 00:09:35.007 "state": "online", 00:09:35.007 "raid_level": "concat", 00:09:35.007 "superblock": true, 00:09:35.007 "num_base_bdevs": 3, 00:09:35.007 "num_base_bdevs_discovered": 3, 00:09:35.007 "num_base_bdevs_operational": 3, 00:09:35.007 "base_bdevs_list": [ 00:09:35.007 { 00:09:35.007 "name": "BaseBdev1", 00:09:35.007 "uuid": "a819d910-0b48-415b-9d77-1fb10dd2a90f", 00:09:35.007 "is_configured": true, 00:09:35.007 "data_offset": 2048, 00:09:35.007 "data_size": 63488 00:09:35.007 }, 00:09:35.007 { 00:09:35.007 "name": "BaseBdev2", 00:09:35.007 "uuid": "5d77ffd7-b73a-4c8b-a803-d0ace8de3dd8", 00:09:35.007 "is_configured": true, 00:09:35.007 "data_offset": 2048, 00:09:35.007 "data_size": 63488 00:09:35.007 }, 00:09:35.007 { 00:09:35.007 "name": "BaseBdev3", 00:09:35.007 "uuid": "ec0bd44a-2d26-4579-84a8-be32019d7738", 00:09:35.007 "is_configured": true, 00:09:35.007 "data_offset": 2048, 00:09:35.007 "data_size": 63488 00:09:35.007 } 00:09:35.007 ] 00:09:35.007 }' 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.007 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.576 [2024-11-20 09:22:00.790664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.576 "name": "Existed_Raid", 00:09:35.576 "aliases": [ 00:09:35.576 "64c213cb-ca54-4801-a72a-5d27e3832fa0" 00:09:35.576 ], 00:09:35.576 "product_name": "Raid Volume", 00:09:35.576 "block_size": 512, 00:09:35.576 "num_blocks": 190464, 00:09:35.576 "uuid": "64c213cb-ca54-4801-a72a-5d27e3832fa0", 00:09:35.576 "assigned_rate_limits": { 00:09:35.576 "rw_ios_per_sec": 0, 00:09:35.576 "rw_mbytes_per_sec": 0, 00:09:35.576 "r_mbytes_per_sec": 0, 00:09:35.576 "w_mbytes_per_sec": 0 00:09:35.576 }, 00:09:35.576 "claimed": false, 00:09:35.576 "zoned": false, 00:09:35.576 "supported_io_types": { 00:09:35.576 "read": true, 00:09:35.576 "write": true, 00:09:35.576 "unmap": true, 00:09:35.576 "flush": true, 00:09:35.576 "reset": true, 00:09:35.576 "nvme_admin": false, 00:09:35.576 "nvme_io": false, 00:09:35.576 "nvme_io_md": false, 00:09:35.576 
"write_zeroes": true, 00:09:35.576 "zcopy": false, 00:09:35.576 "get_zone_info": false, 00:09:35.576 "zone_management": false, 00:09:35.576 "zone_append": false, 00:09:35.576 "compare": false, 00:09:35.576 "compare_and_write": false, 00:09:35.576 "abort": false, 00:09:35.576 "seek_hole": false, 00:09:35.576 "seek_data": false, 00:09:35.576 "copy": false, 00:09:35.576 "nvme_iov_md": false 00:09:35.576 }, 00:09:35.576 "memory_domains": [ 00:09:35.576 { 00:09:35.576 "dma_device_id": "system", 00:09:35.576 "dma_device_type": 1 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.576 "dma_device_type": 2 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "dma_device_id": "system", 00:09:35.576 "dma_device_type": 1 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.576 "dma_device_type": 2 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "dma_device_id": "system", 00:09:35.576 "dma_device_type": 1 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.576 "dma_device_type": 2 00:09:35.576 } 00:09:35.576 ], 00:09:35.576 "driver_specific": { 00:09:35.576 "raid": { 00:09:35.576 "uuid": "64c213cb-ca54-4801-a72a-5d27e3832fa0", 00:09:35.576 "strip_size_kb": 64, 00:09:35.576 "state": "online", 00:09:35.576 "raid_level": "concat", 00:09:35.576 "superblock": true, 00:09:35.576 "num_base_bdevs": 3, 00:09:35.576 "num_base_bdevs_discovered": 3, 00:09:35.576 "num_base_bdevs_operational": 3, 00:09:35.576 "base_bdevs_list": [ 00:09:35.576 { 00:09:35.576 "name": "BaseBdev1", 00:09:35.576 "uuid": "a819d910-0b48-415b-9d77-1fb10dd2a90f", 00:09:35.576 "is_configured": true, 00:09:35.576 "data_offset": 2048, 00:09:35.576 "data_size": 63488 00:09:35.576 }, 00:09:35.576 { 00:09:35.576 "name": "BaseBdev2", 00:09:35.576 "uuid": "5d77ffd7-b73a-4c8b-a803-d0ace8de3dd8", 00:09:35.576 "is_configured": true, 00:09:35.576 "data_offset": 2048, 00:09:35.576 "data_size": 63488 00:09:35.576 }, 
00:09:35.576 { 00:09:35.576 "name": "BaseBdev3", 00:09:35.576 "uuid": "ec0bd44a-2d26-4579-84a8-be32019d7738", 00:09:35.576 "is_configured": true, 00:09:35.576 "data_offset": 2048, 00:09:35.576 "data_size": 63488 00:09:35.576 } 00:09:35.576 ] 00:09:35.576 } 00:09:35.576 } 00:09:35.576 }' 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.576 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.576 BaseBdev2 00:09:35.576 BaseBdev3' 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.577 
09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.577 09:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.577 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.577 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.577 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.577 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.577 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.577 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.836 [2024-11-20 09:22:01.077828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.836 [2024-11-20 09:22:01.077970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.836 [2024-11-20 09:22:01.078052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.836 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.837 "name": "Existed_Raid", 00:09:35.837 "uuid": "64c213cb-ca54-4801-a72a-5d27e3832fa0", 00:09:35.837 "strip_size_kb": 64, 00:09:35.837 "state": "offline", 00:09:35.837 "raid_level": "concat", 00:09:35.837 "superblock": true, 00:09:35.837 "num_base_bdevs": 3, 00:09:35.837 "num_base_bdevs_discovered": 2, 00:09:35.837 "num_base_bdevs_operational": 2, 00:09:35.837 "base_bdevs_list": [ 00:09:35.837 { 00:09:35.837 "name": null, 00:09:35.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.837 "is_configured": false, 00:09:35.837 "data_offset": 0, 00:09:35.837 "data_size": 63488 00:09:35.837 }, 00:09:35.837 { 00:09:35.837 "name": "BaseBdev2", 00:09:35.837 "uuid": "5d77ffd7-b73a-4c8b-a803-d0ace8de3dd8", 00:09:35.837 "is_configured": true, 00:09:35.837 "data_offset": 2048, 00:09:35.837 "data_size": 63488 00:09:35.837 }, 00:09:35.837 { 00:09:35.837 "name": "BaseBdev3", 00:09:35.837 "uuid": "ec0bd44a-2d26-4579-84a8-be32019d7738", 
00:09:35.837 "is_configured": true, 00:09:35.837 "data_offset": 2048, 00:09:35.837 "data_size": 63488 00:09:35.837 } 00:09:35.837 ] 00:09:35.837 }' 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.837 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.406 [2024-11-20 09:22:01.720540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.406 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.665 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.665 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.665 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.665 09:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.665 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.665 09:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.665 [2024-11-20 09:22:01.906779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.665 [2024-11-20 09:22:01.906870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.665 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.925 BaseBdev2 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.925 09:22:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.925 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.925 [ 00:09:36.925 { 00:09:36.925 "name": "BaseBdev2", 00:09:36.925 "aliases": [ 00:09:36.925 "d0513f57-a7f1-42b4-bf33-e05142f188a1" 00:09:36.925 ], 00:09:36.925 "product_name": "Malloc disk", 00:09:36.925 "block_size": 512, 00:09:36.925 "num_blocks": 65536, 00:09:36.925 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:36.925 "assigned_rate_limits": { 00:09:36.925 "rw_ios_per_sec": 0, 00:09:36.925 "rw_mbytes_per_sec": 0, 00:09:36.925 "r_mbytes_per_sec": 0, 00:09:36.925 "w_mbytes_per_sec": 0 00:09:36.925 }, 00:09:36.925 "claimed": false, 00:09:36.925 "zoned": false, 00:09:36.925 "supported_io_types": { 00:09:36.925 "read": true, 00:09:36.925 "write": true, 00:09:36.925 "unmap": true, 00:09:36.925 "flush": true, 00:09:36.925 "reset": true, 00:09:36.926 "nvme_admin": false, 00:09:36.926 "nvme_io": false, 00:09:36.926 "nvme_io_md": false, 00:09:36.926 "write_zeroes": true, 00:09:36.926 "zcopy": true, 00:09:36.926 "get_zone_info": false, 00:09:36.926 
"zone_management": false, 00:09:36.926 "zone_append": false, 00:09:36.926 "compare": false, 00:09:36.926 "compare_and_write": false, 00:09:36.926 "abort": true, 00:09:36.926 "seek_hole": false, 00:09:36.926 "seek_data": false, 00:09:36.926 "copy": true, 00:09:36.926 "nvme_iov_md": false 00:09:36.926 }, 00:09:36.926 "memory_domains": [ 00:09:36.926 { 00:09:36.926 "dma_device_id": "system", 00:09:36.926 "dma_device_type": 1 00:09:36.926 }, 00:09:36.926 { 00:09:36.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.926 "dma_device_type": 2 00:09:36.926 } 00:09:36.926 ], 00:09:36.926 "driver_specific": {} 00:09:36.926 } 00:09:36.926 ] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 BaseBdev3 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 [ 00:09:36.926 { 00:09:36.926 "name": "BaseBdev3", 00:09:36.926 "aliases": [ 00:09:36.926 "908fe935-b389-4d4c-b185-5ed564cfad0b" 00:09:36.926 ], 00:09:36.926 "product_name": "Malloc disk", 00:09:36.926 "block_size": 512, 00:09:36.926 "num_blocks": 65536, 00:09:36.926 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:36.926 "assigned_rate_limits": { 00:09:36.926 "rw_ios_per_sec": 0, 00:09:36.926 "rw_mbytes_per_sec": 0, 00:09:36.926 "r_mbytes_per_sec": 0, 00:09:36.926 "w_mbytes_per_sec": 0 00:09:36.926 }, 00:09:36.926 "claimed": false, 00:09:36.926 "zoned": false, 00:09:36.926 "supported_io_types": { 00:09:36.926 "read": true, 00:09:36.926 "write": true, 00:09:36.926 "unmap": true, 00:09:36.926 "flush": true, 00:09:36.926 "reset": true, 00:09:36.926 "nvme_admin": false, 00:09:36.926 "nvme_io": false, 00:09:36.926 "nvme_io_md": false, 00:09:36.926 "write_zeroes": true, 00:09:36.926 
"zcopy": true, 00:09:36.926 "get_zone_info": false, 00:09:36.926 "zone_management": false, 00:09:36.926 "zone_append": false, 00:09:36.926 "compare": false, 00:09:36.926 "compare_and_write": false, 00:09:36.926 "abort": true, 00:09:36.926 "seek_hole": false, 00:09:36.926 "seek_data": false, 00:09:36.926 "copy": true, 00:09:36.926 "nvme_iov_md": false 00:09:36.926 }, 00:09:36.926 "memory_domains": [ 00:09:36.926 { 00:09:36.926 "dma_device_id": "system", 00:09:36.926 "dma_device_type": 1 00:09:36.926 }, 00:09:36.926 { 00:09:36.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.926 "dma_device_type": 2 00:09:36.926 } 00:09:36.926 ], 00:09:36.926 "driver_specific": {} 00:09:36.926 } 00:09:36.926 ] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 [2024-11-20 09:22:02.284249] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.926 [2024-11-20 09:22:02.284445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.926 [2024-11-20 09:22:02.284503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.926 [2024-11-20 09:22:02.287008] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.926 09:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.926 "name": "Existed_Raid", 00:09:36.926 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:36.926 "strip_size_kb": 64, 00:09:36.926 "state": "configuring", 00:09:36.926 "raid_level": "concat", 00:09:36.926 "superblock": true, 00:09:36.926 "num_base_bdevs": 3, 00:09:36.926 "num_base_bdevs_discovered": 2, 00:09:36.926 "num_base_bdevs_operational": 3, 00:09:36.926 "base_bdevs_list": [ 00:09:36.926 { 00:09:36.926 "name": "BaseBdev1", 00:09:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.926 "is_configured": false, 00:09:36.926 "data_offset": 0, 00:09:36.926 "data_size": 0 00:09:36.926 }, 00:09:36.926 { 00:09:36.926 "name": "BaseBdev2", 00:09:36.926 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:36.926 "is_configured": true, 00:09:36.926 "data_offset": 2048, 00:09:36.926 "data_size": 63488 00:09:36.926 }, 00:09:36.926 { 00:09:36.926 "name": "BaseBdev3", 00:09:36.926 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:36.926 "is_configured": true, 00:09:36.926 "data_offset": 2048, 00:09:36.926 "data_size": 63488 00:09:36.926 } 00:09:36.926 ] 00:09:36.926 }' 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.926 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.518 [2024-11-20 09:22:02.767513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.518 09:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.518 "name": "Existed_Raid", 00:09:37.518 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:37.518 "strip_size_kb": 64, 
00:09:37.518 "state": "configuring", 00:09:37.518 "raid_level": "concat", 00:09:37.518 "superblock": true, 00:09:37.518 "num_base_bdevs": 3, 00:09:37.518 "num_base_bdevs_discovered": 1, 00:09:37.518 "num_base_bdevs_operational": 3, 00:09:37.518 "base_bdevs_list": [ 00:09:37.518 { 00:09:37.518 "name": "BaseBdev1", 00:09:37.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.518 "is_configured": false, 00:09:37.518 "data_offset": 0, 00:09:37.518 "data_size": 0 00:09:37.518 }, 00:09:37.518 { 00:09:37.518 "name": null, 00:09:37.518 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:37.518 "is_configured": false, 00:09:37.518 "data_offset": 0, 00:09:37.518 "data_size": 63488 00:09:37.518 }, 00:09:37.518 { 00:09:37.518 "name": "BaseBdev3", 00:09:37.518 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:37.518 "is_configured": true, 00:09:37.518 "data_offset": 2048, 00:09:37.518 "data_size": 63488 00:09:37.518 } 00:09:37.518 ] 00:09:37.518 }' 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.518 09:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.778 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.037 [2024-11-20 09:22:03.275932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.037 BaseBdev1 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.037 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.037 
[ 00:09:38.037 { 00:09:38.037 "name": "BaseBdev1", 00:09:38.037 "aliases": [ 00:09:38.037 "f6c21525-0cf8-42c8-ace9-13562ba93c68" 00:09:38.037 ], 00:09:38.037 "product_name": "Malloc disk", 00:09:38.037 "block_size": 512, 00:09:38.037 "num_blocks": 65536, 00:09:38.037 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:38.037 "assigned_rate_limits": { 00:09:38.037 "rw_ios_per_sec": 0, 00:09:38.037 "rw_mbytes_per_sec": 0, 00:09:38.037 "r_mbytes_per_sec": 0, 00:09:38.037 "w_mbytes_per_sec": 0 00:09:38.037 }, 00:09:38.037 "claimed": true, 00:09:38.037 "claim_type": "exclusive_write", 00:09:38.037 "zoned": false, 00:09:38.038 "supported_io_types": { 00:09:38.038 "read": true, 00:09:38.038 "write": true, 00:09:38.038 "unmap": true, 00:09:38.038 "flush": true, 00:09:38.038 "reset": true, 00:09:38.038 "nvme_admin": false, 00:09:38.038 "nvme_io": false, 00:09:38.038 "nvme_io_md": false, 00:09:38.038 "write_zeroes": true, 00:09:38.038 "zcopy": true, 00:09:38.038 "get_zone_info": false, 00:09:38.038 "zone_management": false, 00:09:38.038 "zone_append": false, 00:09:38.038 "compare": false, 00:09:38.038 "compare_and_write": false, 00:09:38.038 "abort": true, 00:09:38.038 "seek_hole": false, 00:09:38.038 "seek_data": false, 00:09:38.038 "copy": true, 00:09:38.038 "nvme_iov_md": false 00:09:38.038 }, 00:09:38.038 "memory_domains": [ 00:09:38.038 { 00:09:38.038 "dma_device_id": "system", 00:09:38.038 "dma_device_type": 1 00:09:38.038 }, 00:09:38.038 { 00:09:38.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.038 "dma_device_type": 2 00:09:38.038 } 00:09:38.038 ], 00:09:38.038 "driver_specific": {} 00:09:38.038 } 00:09:38.038 ] 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.038 "name": "Existed_Raid", 00:09:38.038 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:38.038 "strip_size_kb": 64, 00:09:38.038 "state": "configuring", 00:09:38.038 "raid_level": "concat", 00:09:38.038 "superblock": true, 
00:09:38.038 "num_base_bdevs": 3, 00:09:38.038 "num_base_bdevs_discovered": 2, 00:09:38.038 "num_base_bdevs_operational": 3, 00:09:38.038 "base_bdevs_list": [ 00:09:38.038 { 00:09:38.038 "name": "BaseBdev1", 00:09:38.038 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:38.038 "is_configured": true, 00:09:38.038 "data_offset": 2048, 00:09:38.038 "data_size": 63488 00:09:38.038 }, 00:09:38.038 { 00:09:38.038 "name": null, 00:09:38.038 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:38.038 "is_configured": false, 00:09:38.038 "data_offset": 0, 00:09:38.038 "data_size": 63488 00:09:38.038 }, 00:09:38.038 { 00:09:38.038 "name": "BaseBdev3", 00:09:38.038 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:38.038 "is_configured": true, 00:09:38.038 "data_offset": 2048, 00:09:38.038 "data_size": 63488 00:09:38.038 } 00:09:38.038 ] 00:09:38.038 }' 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.038 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.298 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.298 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.298 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.298 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.557 [2024-11-20 09:22:03.803177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.557 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.557 "name": "Existed_Raid", 00:09:38.557 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:38.557 "strip_size_kb": 64, 00:09:38.557 "state": "configuring", 00:09:38.557 "raid_level": "concat", 00:09:38.557 "superblock": true, 00:09:38.557 "num_base_bdevs": 3, 00:09:38.557 "num_base_bdevs_discovered": 1, 00:09:38.557 "num_base_bdevs_operational": 3, 00:09:38.557 "base_bdevs_list": [ 00:09:38.557 { 00:09:38.557 "name": "BaseBdev1", 00:09:38.557 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:38.557 "is_configured": true, 00:09:38.557 "data_offset": 2048, 00:09:38.557 "data_size": 63488 00:09:38.557 }, 00:09:38.557 { 00:09:38.557 "name": null, 00:09:38.557 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:38.557 "is_configured": false, 00:09:38.557 "data_offset": 0, 00:09:38.557 "data_size": 63488 00:09:38.557 }, 00:09:38.557 { 00:09:38.557 "name": null, 00:09:38.557 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:38.557 "is_configured": false, 00:09:38.558 "data_offset": 0, 00:09:38.558 "data_size": 63488 00:09:38.558 } 00:09:38.558 ] 00:09:38.558 }' 00:09:38.558 09:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.558 09:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.817 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.817 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.817 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.817 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:38.817 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.077 [2024-11-20 09:22:04.282502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.077 "name": "Existed_Raid", 00:09:39.077 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:39.077 "strip_size_kb": 64, 00:09:39.077 "state": "configuring", 00:09:39.077 "raid_level": "concat", 00:09:39.077 "superblock": true, 00:09:39.077 "num_base_bdevs": 3, 00:09:39.077 "num_base_bdevs_discovered": 2, 00:09:39.077 "num_base_bdevs_operational": 3, 00:09:39.077 "base_bdevs_list": [ 00:09:39.077 { 00:09:39.077 "name": "BaseBdev1", 00:09:39.077 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:39.077 "is_configured": true, 00:09:39.077 "data_offset": 2048, 00:09:39.077 "data_size": 63488 00:09:39.077 }, 00:09:39.077 { 00:09:39.077 "name": null, 00:09:39.077 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:39.077 "is_configured": false, 00:09:39.077 "data_offset": 0, 00:09:39.077 "data_size": 63488 00:09:39.077 }, 00:09:39.077 { 00:09:39.077 "name": "BaseBdev3", 00:09:39.077 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:39.077 "is_configured": true, 00:09:39.077 "data_offset": 2048, 00:09:39.077 "data_size": 63488 00:09:39.077 } 00:09:39.077 ] 00:09:39.077 }' 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.077 09:22:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.336 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.336 [2024-11-20 09:22:04.769685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.595 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.596 "name": "Existed_Raid", 00:09:39.596 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:39.596 "strip_size_kb": 64, 00:09:39.596 "state": "configuring", 00:09:39.596 "raid_level": "concat", 00:09:39.596 "superblock": true, 00:09:39.596 "num_base_bdevs": 3, 00:09:39.596 "num_base_bdevs_discovered": 1, 00:09:39.596 "num_base_bdevs_operational": 3, 00:09:39.596 "base_bdevs_list": [ 00:09:39.596 { 00:09:39.596 "name": null, 00:09:39.596 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:39.596 "is_configured": false, 00:09:39.596 "data_offset": 0, 00:09:39.596 "data_size": 63488 00:09:39.596 }, 00:09:39.596 { 00:09:39.596 "name": null, 00:09:39.596 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:39.596 "is_configured": false, 00:09:39.596 "data_offset": 0, 
00:09:39.596 "data_size": 63488 00:09:39.596 }, 00:09:39.596 { 00:09:39.596 "name": "BaseBdev3", 00:09:39.596 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:39.596 "is_configured": true, 00:09:39.596 "data_offset": 2048, 00:09:39.596 "data_size": 63488 00:09:39.596 } 00:09:39.596 ] 00:09:39.596 }' 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.596 09:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 [2024-11-20 09:22:05.399785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.165 09:22:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.165 "name": "Existed_Raid", 00:09:40.165 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:40.165 "strip_size_kb": 64, 00:09:40.165 "state": "configuring", 00:09:40.165 "raid_level": "concat", 00:09:40.165 "superblock": true, 00:09:40.165 "num_base_bdevs": 3, 00:09:40.165 
"num_base_bdevs_discovered": 2, 00:09:40.165 "num_base_bdevs_operational": 3, 00:09:40.165 "base_bdevs_list": [ 00:09:40.165 { 00:09:40.165 "name": null, 00:09:40.165 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:40.166 "is_configured": false, 00:09:40.166 "data_offset": 0, 00:09:40.166 "data_size": 63488 00:09:40.166 }, 00:09:40.166 { 00:09:40.166 "name": "BaseBdev2", 00:09:40.166 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:40.166 "is_configured": true, 00:09:40.166 "data_offset": 2048, 00:09:40.166 "data_size": 63488 00:09:40.166 }, 00:09:40.166 { 00:09:40.166 "name": "BaseBdev3", 00:09:40.166 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:40.166 "is_configured": true, 00:09:40.166 "data_offset": 2048, 00:09:40.166 "data_size": 63488 00:09:40.166 } 00:09:40.166 ] 00:09:40.166 }' 00:09:40.166 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.166 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.425 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.425 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.425 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.425 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.686 09:22:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f6c21525-0cf8-42c8-ace9-13562ba93c68 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.686 09:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 [2024-11-20 09:22:06.032670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.686 [2024-11-20 09:22:06.032983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.686 [2024-11-20 09:22:06.033003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.686 [2024-11-20 09:22:06.033331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.686 [2024-11-20 09:22:06.033530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.686 [2024-11-20 09:22:06.033542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.686 NewBaseBdev 00:09:40.686 [2024-11-20 09:22:06.033716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.686 
09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 [ 00:09:40.686 { 00:09:40.686 "name": "NewBaseBdev", 00:09:40.686 "aliases": [ 00:09:40.686 "f6c21525-0cf8-42c8-ace9-13562ba93c68" 00:09:40.686 ], 00:09:40.686 "product_name": "Malloc disk", 00:09:40.686 "block_size": 512, 00:09:40.686 "num_blocks": 65536, 00:09:40.686 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:40.686 "assigned_rate_limits": { 00:09:40.686 "rw_ios_per_sec": 0, 00:09:40.686 "rw_mbytes_per_sec": 0, 00:09:40.686 "r_mbytes_per_sec": 0, 00:09:40.686 "w_mbytes_per_sec": 0 00:09:40.686 }, 00:09:40.686 "claimed": true, 00:09:40.686 "claim_type": "exclusive_write", 00:09:40.686 "zoned": false, 00:09:40.686 "supported_io_types": { 00:09:40.686 "read": true, 00:09:40.686 "write": true, 00:09:40.686 
"unmap": true, 00:09:40.686 "flush": true, 00:09:40.686 "reset": true, 00:09:40.686 "nvme_admin": false, 00:09:40.686 "nvme_io": false, 00:09:40.686 "nvme_io_md": false, 00:09:40.686 "write_zeroes": true, 00:09:40.686 "zcopy": true, 00:09:40.686 "get_zone_info": false, 00:09:40.686 "zone_management": false, 00:09:40.686 "zone_append": false, 00:09:40.686 "compare": false, 00:09:40.686 "compare_and_write": false, 00:09:40.686 "abort": true, 00:09:40.686 "seek_hole": false, 00:09:40.686 "seek_data": false, 00:09:40.686 "copy": true, 00:09:40.686 "nvme_iov_md": false 00:09:40.686 }, 00:09:40.686 "memory_domains": [ 00:09:40.686 { 00:09:40.686 "dma_device_id": "system", 00:09:40.686 "dma_device_type": 1 00:09:40.686 }, 00:09:40.686 { 00:09:40.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.686 "dma_device_type": 2 00:09:40.686 } 00:09:40.686 ], 00:09:40.686 "driver_specific": {} 00:09:40.686 } 00:09:40.686 ] 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.686 "name": "Existed_Raid", 00:09:40.686 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:40.686 "strip_size_kb": 64, 00:09:40.686 "state": "online", 00:09:40.686 "raid_level": "concat", 00:09:40.686 "superblock": true, 00:09:40.686 "num_base_bdevs": 3, 00:09:40.686 "num_base_bdevs_discovered": 3, 00:09:40.686 "num_base_bdevs_operational": 3, 00:09:40.686 "base_bdevs_list": [ 00:09:40.686 { 00:09:40.686 "name": "NewBaseBdev", 00:09:40.686 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:40.686 "is_configured": true, 00:09:40.686 "data_offset": 2048, 00:09:40.686 "data_size": 63488 00:09:40.686 }, 00:09:40.686 { 00:09:40.686 "name": "BaseBdev2", 00:09:40.686 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:40.686 "is_configured": true, 00:09:40.686 "data_offset": 2048, 00:09:40.686 "data_size": 63488 00:09:40.686 }, 00:09:40.686 { 00:09:40.686 "name": "BaseBdev3", 00:09:40.686 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 
00:09:40.686 "is_configured": true, 00:09:40.686 "data_offset": 2048, 00:09:40.686 "data_size": 63488 00:09:40.686 } 00:09:40.686 ] 00:09:40.686 }' 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.686 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.257 [2024-11-20 09:22:06.468403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.257 "name": "Existed_Raid", 00:09:41.257 "aliases": [ 00:09:41.257 "904e913c-820e-4c54-98f2-e1d27d05d4d1" 00:09:41.257 ], 00:09:41.257 
"product_name": "Raid Volume", 00:09:41.257 "block_size": 512, 00:09:41.257 "num_blocks": 190464, 00:09:41.257 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:41.257 "assigned_rate_limits": { 00:09:41.257 "rw_ios_per_sec": 0, 00:09:41.257 "rw_mbytes_per_sec": 0, 00:09:41.257 "r_mbytes_per_sec": 0, 00:09:41.257 "w_mbytes_per_sec": 0 00:09:41.257 }, 00:09:41.257 "claimed": false, 00:09:41.257 "zoned": false, 00:09:41.257 "supported_io_types": { 00:09:41.257 "read": true, 00:09:41.257 "write": true, 00:09:41.257 "unmap": true, 00:09:41.257 "flush": true, 00:09:41.257 "reset": true, 00:09:41.257 "nvme_admin": false, 00:09:41.257 "nvme_io": false, 00:09:41.257 "nvme_io_md": false, 00:09:41.257 "write_zeroes": true, 00:09:41.257 "zcopy": false, 00:09:41.257 "get_zone_info": false, 00:09:41.257 "zone_management": false, 00:09:41.257 "zone_append": false, 00:09:41.257 "compare": false, 00:09:41.257 "compare_and_write": false, 00:09:41.257 "abort": false, 00:09:41.257 "seek_hole": false, 00:09:41.257 "seek_data": false, 00:09:41.257 "copy": false, 00:09:41.257 "nvme_iov_md": false 00:09:41.257 }, 00:09:41.257 "memory_domains": [ 00:09:41.257 { 00:09:41.257 "dma_device_id": "system", 00:09:41.257 "dma_device_type": 1 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.257 "dma_device_type": 2 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "dma_device_id": "system", 00:09:41.257 "dma_device_type": 1 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.257 "dma_device_type": 2 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "dma_device_id": "system", 00:09:41.257 "dma_device_type": 1 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.257 "dma_device_type": 2 00:09:41.257 } 00:09:41.257 ], 00:09:41.257 "driver_specific": { 00:09:41.257 "raid": { 00:09:41.257 "uuid": "904e913c-820e-4c54-98f2-e1d27d05d4d1", 00:09:41.257 "strip_size_kb": 64, 00:09:41.257 
"state": "online", 00:09:41.257 "raid_level": "concat", 00:09:41.257 "superblock": true, 00:09:41.257 "num_base_bdevs": 3, 00:09:41.257 "num_base_bdevs_discovered": 3, 00:09:41.257 "num_base_bdevs_operational": 3, 00:09:41.257 "base_bdevs_list": [ 00:09:41.257 { 00:09:41.257 "name": "NewBaseBdev", 00:09:41.257 "uuid": "f6c21525-0cf8-42c8-ace9-13562ba93c68", 00:09:41.257 "is_configured": true, 00:09:41.257 "data_offset": 2048, 00:09:41.257 "data_size": 63488 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "name": "BaseBdev2", 00:09:41.257 "uuid": "d0513f57-a7f1-42b4-bf33-e05142f188a1", 00:09:41.257 "is_configured": true, 00:09:41.257 "data_offset": 2048, 00:09:41.257 "data_size": 63488 00:09:41.257 }, 00:09:41.257 { 00:09:41.257 "name": "BaseBdev3", 00:09:41.257 "uuid": "908fe935-b389-4d4c-b185-5ed564cfad0b", 00:09:41.257 "is_configured": true, 00:09:41.257 "data_offset": 2048, 00:09:41.257 "data_size": 63488 00:09:41.257 } 00:09:41.257 ] 00:09:41.257 } 00:09:41.257 } 00:09:41.257 }' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.257 BaseBdev2 00:09:41.257 BaseBdev3' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.257 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.517 [2024-11-20 09:22:06.751663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.517 [2024-11-20 09:22:06.751722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.517 [2024-11-20 09:22:06.751865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.517 [2024-11-20 09:22:06.751944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.517 [2024-11-20 09:22:06.751961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66515 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66515 ']' 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66515 00:09:41.517 
09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66515 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66515' 00:09:41.517 killing process with pid 66515 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66515 00:09:41.517 [2024-11-20 09:22:06.799703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.517 09:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66515 00:09:41.776 [2024-11-20 09:22:07.187137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.171 09:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.171 00:09:43.171 real 0m11.619s 00:09:43.171 user 0m18.012s 00:09:43.171 sys 0m2.105s 00:09:43.171 09:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.171 ************************************ 00:09:43.171 END TEST raid_state_function_test_sb 00:09:43.171 ************************************ 00:09:43.171 09:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 09:22:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:43.431 09:22:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.431 09:22:08 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.431 09:22:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 ************************************ 00:09:43.431 START TEST raid_superblock_test 00:09:43.431 ************************************ 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:43.431 09:22:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67145 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67145 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67145 ']' 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:43.431 09:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 [2024-11-20 09:22:08.767380] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:09:43.431 [2024-11-20 09:22:08.767563] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67145 ] 00:09:43.691 [2024-11-20 09:22:08.959919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.691 [2024-11-20 09:22:09.128304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.264 [2024-11-20 09:22:09.419329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.264 [2024-11-20 09:22:09.419443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:44.524 
09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.524 malloc1 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.524 [2024-11-20 09:22:09.782900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.524 [2024-11-20 09:22:09.783127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.524 [2024-11-20 09:22:09.783197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:44.524 [2024-11-20 09:22:09.783243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.524 [2024-11-20 09:22:09.786401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.524 [2024-11-20 09:22:09.786548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.524 pt1 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.524 malloc2 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.524 [2024-11-20 09:22:09.862292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.524 [2024-11-20 09:22:09.862409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.524 [2024-11-20 09:22:09.862458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:44.524 [2024-11-20 09:22:09.862471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.524 [2024-11-20 09:22:09.865585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.524 [2024-11-20 09:22:09.865643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.524 
pt2 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.524 malloc3 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.524 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.525 [2024-11-20 09:22:09.956455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.525 [2024-11-20 09:22:09.956684] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.525 [2024-11-20 09:22:09.956737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:44.525 [2024-11-20 09:22:09.956774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.525 [2024-11-20 09:22:09.959813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.525 [2024-11-20 09:22:09.959958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.525 pt3 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.525 [2024-11-20 09:22:09.968851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.525 [2024-11-20 09:22:09.971509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.525 [2024-11-20 09:22:09.971604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.525 [2024-11-20 09:22:09.971819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:44.525 [2024-11-20 09:22:09.971851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.525 [2024-11-20 09:22:09.972238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:44.525 [2024-11-20 09:22:09.972497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:44.525 [2024-11-20 09:22:09.972512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:44.525 [2024-11-20 09:22:09.972829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.525 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.783 09:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.783 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.783 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.783 09:22:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.783 09:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.783 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.783 "name": "raid_bdev1", 00:09:44.783 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:44.783 "strip_size_kb": 64, 00:09:44.783 "state": "online", 00:09:44.783 "raid_level": "concat", 00:09:44.783 "superblock": true, 00:09:44.783 "num_base_bdevs": 3, 00:09:44.783 "num_base_bdevs_discovered": 3, 00:09:44.783 "num_base_bdevs_operational": 3, 00:09:44.783 "base_bdevs_list": [ 00:09:44.783 { 00:09:44.783 "name": "pt1", 00:09:44.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.783 "is_configured": true, 00:09:44.783 "data_offset": 2048, 00:09:44.783 "data_size": 63488 00:09:44.783 }, 00:09:44.783 { 00:09:44.783 "name": "pt2", 00:09:44.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.783 "is_configured": true, 00:09:44.783 "data_offset": 2048, 00:09:44.783 "data_size": 63488 00:09:44.783 }, 00:09:44.783 { 00:09:44.783 "name": "pt3", 00:09:44.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.783 "is_configured": true, 00:09:44.783 "data_offset": 2048, 00:09:44.783 "data_size": 63488 00:09:44.783 } 00:09:44.783 ] 00:09:44.783 }' 00:09:44.783 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.783 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.042 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.042 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.042 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.042 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.043 [2024-11-20 09:22:10.445102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.043 "name": "raid_bdev1", 00:09:45.043 "aliases": [ 00:09:45.043 "ec2d2e51-7e37-43ce-9584-5a672a3d25ed" 00:09:45.043 ], 00:09:45.043 "product_name": "Raid Volume", 00:09:45.043 "block_size": 512, 00:09:45.043 "num_blocks": 190464, 00:09:45.043 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:45.043 "assigned_rate_limits": { 00:09:45.043 "rw_ios_per_sec": 0, 00:09:45.043 "rw_mbytes_per_sec": 0, 00:09:45.043 "r_mbytes_per_sec": 0, 00:09:45.043 "w_mbytes_per_sec": 0 00:09:45.043 }, 00:09:45.043 "claimed": false, 00:09:45.043 "zoned": false, 00:09:45.043 "supported_io_types": { 00:09:45.043 "read": true, 00:09:45.043 "write": true, 00:09:45.043 "unmap": true, 00:09:45.043 "flush": true, 00:09:45.043 "reset": true, 00:09:45.043 "nvme_admin": false, 00:09:45.043 "nvme_io": false, 00:09:45.043 "nvme_io_md": false, 00:09:45.043 "write_zeroes": true, 00:09:45.043 "zcopy": false, 00:09:45.043 "get_zone_info": false, 00:09:45.043 "zone_management": false, 00:09:45.043 "zone_append": false, 00:09:45.043 "compare": 
false, 00:09:45.043 "compare_and_write": false, 00:09:45.043 "abort": false, 00:09:45.043 "seek_hole": false, 00:09:45.043 "seek_data": false, 00:09:45.043 "copy": false, 00:09:45.043 "nvme_iov_md": false 00:09:45.043 }, 00:09:45.043 "memory_domains": [ 00:09:45.043 { 00:09:45.043 "dma_device_id": "system", 00:09:45.043 "dma_device_type": 1 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.043 "dma_device_type": 2 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "dma_device_id": "system", 00:09:45.043 "dma_device_type": 1 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.043 "dma_device_type": 2 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "dma_device_id": "system", 00:09:45.043 "dma_device_type": 1 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.043 "dma_device_type": 2 00:09:45.043 } 00:09:45.043 ], 00:09:45.043 "driver_specific": { 00:09:45.043 "raid": { 00:09:45.043 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:45.043 "strip_size_kb": 64, 00:09:45.043 "state": "online", 00:09:45.043 "raid_level": "concat", 00:09:45.043 "superblock": true, 00:09:45.043 "num_base_bdevs": 3, 00:09:45.043 "num_base_bdevs_discovered": 3, 00:09:45.043 "num_base_bdevs_operational": 3, 00:09:45.043 "base_bdevs_list": [ 00:09:45.043 { 00:09:45.043 "name": "pt1", 00:09:45.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.043 "is_configured": true, 00:09:45.043 "data_offset": 2048, 00:09:45.043 "data_size": 63488 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "name": "pt2", 00:09:45.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.043 "is_configured": true, 00:09:45.043 "data_offset": 2048, 00:09:45.043 "data_size": 63488 00:09:45.043 }, 00:09:45.043 { 00:09:45.043 "name": "pt3", 00:09:45.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.043 "is_configured": true, 00:09:45.043 "data_offset": 2048, 00:09:45.043 
"data_size": 63488 00:09:45.043 } 00:09:45.043 ] 00:09:45.043 } 00:09:45.043 } 00:09:45.043 }' 00:09:45.043 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.301 pt2 00:09:45.301 pt3' 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.301 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.302 [2024-11-20 09:22:10.720791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.302 09:22:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ec2d2e51-7e37-43ce-9584-5a672a3d25ed 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ec2d2e51-7e37-43ce-9584-5a672a3d25ed ']' 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.561 [2024-11-20 09:22:10.768330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.561 [2024-11-20 09:22:10.768395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.561 [2024-11-20 09:22:10.768549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.561 [2024-11-20 09:22:10.768638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.561 [2024-11-20 09:22:10.768651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:45.561 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.562 [2024-11-20 09:22:10.920175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:45.562 [2024-11-20 09:22:10.922761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:45.562 
[2024-11-20 09:22:10.922882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:45.562 [2024-11-20 09:22:10.922976] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:45.562 [2024-11-20 09:22:10.923081] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:45.562 [2024-11-20 09:22:10.923137] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:45.562 [2024-11-20 09:22:10.923214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.562 [2024-11-20 09:22:10.923247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:45.562 request: 00:09:45.562 { 00:09:45.562 "name": "raid_bdev1", 00:09:45.562 "raid_level": "concat", 00:09:45.562 "base_bdevs": [ 00:09:45.562 "malloc1", 00:09:45.562 "malloc2", 00:09:45.562 "malloc3" 00:09:45.562 ], 00:09:45.562 "strip_size_kb": 64, 00:09:45.562 "superblock": false, 00:09:45.562 "method": "bdev_raid_create", 00:09:45.562 "req_id": 1 00:09:45.562 } 00:09:45.562 Got JSON-RPC error response 00:09:45.562 response: 00:09:45.562 { 00:09:45.562 "code": -17, 00:09:45.562 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:45.562 } 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.562 09:22:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.562 [2024-11-20 09:22:10.992097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.562 [2024-11-20 09:22:10.992330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.562 [2024-11-20 09:22:10.992381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:45.562 [2024-11-20 09:22:10.992420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.562 [2024-11-20 09:22:10.995484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.562 [2024-11-20 09:22:10.995619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.562 [2024-11-20 09:22:10.995807] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:45.562 [2024-11-20 09:22:10.995935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:45.562 pt1 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.562 09:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.562 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.562 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.562 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.562 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.562 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.820 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.820 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.820 "name": "raid_bdev1", 00:09:45.820 "uuid": 
"ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:45.820 "strip_size_kb": 64, 00:09:45.820 "state": "configuring", 00:09:45.820 "raid_level": "concat", 00:09:45.820 "superblock": true, 00:09:45.820 "num_base_bdevs": 3, 00:09:45.820 "num_base_bdevs_discovered": 1, 00:09:45.820 "num_base_bdevs_operational": 3, 00:09:45.820 "base_bdevs_list": [ 00:09:45.820 { 00:09:45.820 "name": "pt1", 00:09:45.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.820 "is_configured": true, 00:09:45.820 "data_offset": 2048, 00:09:45.820 "data_size": 63488 00:09:45.820 }, 00:09:45.820 { 00:09:45.820 "name": null, 00:09:45.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.820 "is_configured": false, 00:09:45.820 "data_offset": 2048, 00:09:45.820 "data_size": 63488 00:09:45.820 }, 00:09:45.820 { 00:09:45.820 "name": null, 00:09:45.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.820 "is_configured": false, 00:09:45.820 "data_offset": 2048, 00:09:45.820 "data_size": 63488 00:09:45.820 } 00:09:45.820 ] 00:09:45.820 }' 00:09:45.820 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.820 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.079 [2024-11-20 09:22:11.455562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.079 [2024-11-20 09:22:11.455788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.079 [2024-11-20 09:22:11.455833] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:46.079 [2024-11-20 09:22:11.455846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.079 [2024-11-20 09:22:11.456492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.079 [2024-11-20 09:22:11.456522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.079 [2024-11-20 09:22:11.456644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.079 [2024-11-20 09:22:11.456673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.079 pt2 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.079 [2024-11-20 09:22:11.463517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.079 "name": "raid_bdev1", 00:09:46.079 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:46.079 "strip_size_kb": 64, 00:09:46.079 "state": "configuring", 00:09:46.079 "raid_level": "concat", 00:09:46.079 "superblock": true, 00:09:46.079 "num_base_bdevs": 3, 00:09:46.079 "num_base_bdevs_discovered": 1, 00:09:46.079 "num_base_bdevs_operational": 3, 00:09:46.079 "base_bdevs_list": [ 00:09:46.079 { 00:09:46.079 "name": "pt1", 00:09:46.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.079 "is_configured": true, 00:09:46.079 "data_offset": 2048, 00:09:46.079 "data_size": 63488 00:09:46.079 }, 00:09:46.079 { 00:09:46.079 "name": null, 00:09:46.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.079 "is_configured": false, 00:09:46.079 "data_offset": 0, 00:09:46.079 "data_size": 63488 00:09:46.079 }, 00:09:46.079 { 00:09:46.079 "name": null, 00:09:46.079 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:46.079 "is_configured": false, 00:09:46.079 "data_offset": 2048, 00:09:46.079 "data_size": 63488 00:09:46.079 } 00:09:46.079 ] 00:09:46.079 }' 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.079 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.716 [2024-11-20 09:22:11.922721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.716 [2024-11-20 09:22:11.922846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.716 [2024-11-20 09:22:11.922871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:46.716 [2024-11-20 09:22:11.922885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.716 [2024-11-20 09:22:11.923565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.716 [2024-11-20 09:22:11.923594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.716 [2024-11-20 09:22:11.923710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.716 [2024-11-20 09:22:11.923743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.716 pt2 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.716 [2024-11-20 09:22:11.934677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.716 [2024-11-20 09:22:11.934760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.716 [2024-11-20 09:22:11.934782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:46.716 [2024-11-20 09:22:11.934794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.716 [2024-11-20 09:22:11.935330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.716 [2024-11-20 09:22:11.935366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.716 [2024-11-20 09:22:11.935487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:46.716 [2024-11-20 09:22:11.935519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.716 [2024-11-20 09:22:11.935674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.716 [2024-11-20 09:22:11.935693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:46.716 [2024-11-20 09:22:11.936038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:46.716 [2024-11-20 
09:22:11.936228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.716 [2024-11-20 09:22:11.936238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:46.716 [2024-11-20 09:22:11.936394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.716 pt3 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.716 "name": "raid_bdev1", 00:09:46.716 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:46.716 "strip_size_kb": 64, 00:09:46.716 "state": "online", 00:09:46.716 "raid_level": "concat", 00:09:46.716 "superblock": true, 00:09:46.716 "num_base_bdevs": 3, 00:09:46.716 "num_base_bdevs_discovered": 3, 00:09:46.716 "num_base_bdevs_operational": 3, 00:09:46.716 "base_bdevs_list": [ 00:09:46.716 { 00:09:46.716 "name": "pt1", 00:09:46.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.716 "is_configured": true, 00:09:46.716 "data_offset": 2048, 00:09:46.716 "data_size": 63488 00:09:46.716 }, 00:09:46.716 { 00:09:46.716 "name": "pt2", 00:09:46.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.716 "is_configured": true, 00:09:46.716 "data_offset": 2048, 00:09:46.716 "data_size": 63488 00:09:46.716 }, 00:09:46.716 { 00:09:46.716 "name": "pt3", 00:09:46.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.716 "is_configured": true, 00:09:46.716 "data_offset": 2048, 00:09:46.716 "data_size": 63488 00:09:46.716 } 00:09:46.716 ] 00:09:46.716 }' 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.716 09:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.976 
09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.976 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.976 [2024-11-20 09:22:12.418283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.235 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.235 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.235 "name": "raid_bdev1", 00:09:47.235 "aliases": [ 00:09:47.235 "ec2d2e51-7e37-43ce-9584-5a672a3d25ed" 00:09:47.235 ], 00:09:47.235 "product_name": "Raid Volume", 00:09:47.235 "block_size": 512, 00:09:47.235 "num_blocks": 190464, 00:09:47.235 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:47.235 "assigned_rate_limits": { 00:09:47.235 "rw_ios_per_sec": 0, 00:09:47.235 "rw_mbytes_per_sec": 0, 00:09:47.235 "r_mbytes_per_sec": 0, 00:09:47.235 "w_mbytes_per_sec": 0 00:09:47.235 }, 00:09:47.235 "claimed": false, 00:09:47.235 "zoned": false, 00:09:47.235 "supported_io_types": { 00:09:47.235 "read": true, 00:09:47.235 "write": true, 00:09:47.235 "unmap": true, 00:09:47.235 "flush": true, 00:09:47.235 "reset": true, 00:09:47.235 "nvme_admin": false, 00:09:47.235 "nvme_io": false, 00:09:47.235 "nvme_io_md": false, 00:09:47.235 
"write_zeroes": true, 00:09:47.235 "zcopy": false, 00:09:47.235 "get_zone_info": false, 00:09:47.235 "zone_management": false, 00:09:47.235 "zone_append": false, 00:09:47.235 "compare": false, 00:09:47.235 "compare_and_write": false, 00:09:47.235 "abort": false, 00:09:47.235 "seek_hole": false, 00:09:47.235 "seek_data": false, 00:09:47.235 "copy": false, 00:09:47.235 "nvme_iov_md": false 00:09:47.235 }, 00:09:47.235 "memory_domains": [ 00:09:47.235 { 00:09:47.235 "dma_device_id": "system", 00:09:47.235 "dma_device_type": 1 00:09:47.235 }, 00:09:47.235 { 00:09:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.235 "dma_device_type": 2 00:09:47.235 }, 00:09:47.235 { 00:09:47.235 "dma_device_id": "system", 00:09:47.235 "dma_device_type": 1 00:09:47.235 }, 00:09:47.235 { 00:09:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.235 "dma_device_type": 2 00:09:47.235 }, 00:09:47.235 { 00:09:47.235 "dma_device_id": "system", 00:09:47.235 "dma_device_type": 1 00:09:47.235 }, 00:09:47.235 { 00:09:47.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.235 "dma_device_type": 2 00:09:47.235 } 00:09:47.235 ], 00:09:47.235 "driver_specific": { 00:09:47.235 "raid": { 00:09:47.235 "uuid": "ec2d2e51-7e37-43ce-9584-5a672a3d25ed", 00:09:47.235 "strip_size_kb": 64, 00:09:47.235 "state": "online", 00:09:47.235 "raid_level": "concat", 00:09:47.235 "superblock": true, 00:09:47.235 "num_base_bdevs": 3, 00:09:47.235 "num_base_bdevs_discovered": 3, 00:09:47.235 "num_base_bdevs_operational": 3, 00:09:47.235 "base_bdevs_list": [ 00:09:47.235 { 00:09:47.235 "name": "pt1", 00:09:47.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.235 "is_configured": true, 00:09:47.236 "data_offset": 2048, 00:09:47.236 "data_size": 63488 00:09:47.236 }, 00:09:47.236 { 00:09:47.236 "name": "pt2", 00:09:47.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.236 "is_configured": true, 00:09:47.236 "data_offset": 2048, 00:09:47.236 "data_size": 63488 00:09:47.236 }, 00:09:47.236 
{ 00:09:47.236 "name": "pt3", 00:09:47.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.236 "is_configured": true, 00:09:47.236 "data_offset": 2048, 00:09:47.236 "data_size": 63488 00:09:47.236 } 00:09:47.236 ] 00:09:47.236 } 00:09:47.236 } 00:09:47.236 }' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.236 pt2 00:09:47.236 pt3' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:47.236 09:22:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.236 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:47.495 
[2024-11-20 09:22:12.698033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ec2d2e51-7e37-43ce-9584-5a672a3d25ed '!=' ec2d2e51-7e37-43ce-9584-5a672a3d25ed ']' 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67145 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67145 ']' 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67145 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67145 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67145' 00:09:47.495 killing process with pid 67145 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67145 00:09:47.495 09:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67145 00:09:47.495 [2024-11-20 09:22:12.782641] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.495 [2024-11-20 09:22:12.782809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.495 [2024-11-20 09:22:12.782917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.495 [2024-11-20 09:22:12.782933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:47.753 [2024-11-20 09:22:13.186423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.652 ************************************ 00:09:49.652 END TEST raid_superblock_test 00:09:49.652 09:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:49.652 00:09:49.652 real 0m5.971s 00:09:49.652 user 0m8.300s 00:09:49.652 sys 0m1.039s 00:09:49.652 09:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.652 09:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.652 ************************************ 00:09:49.652 09:22:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:49.652 09:22:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.652 09:22:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.652 09:22:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.652 ************************************ 00:09:49.652 START TEST raid_read_error_test 00:09:49.652 ************************************ 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:49.652 09:22:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.652 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.W97tNCPY0w 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67405 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67405 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67405 ']' 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.653 09:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.653 [2024-11-20 09:22:14.812944] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:09:49.653 [2024-11-20 09:22:14.813088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67405 ] 00:09:49.653 [2024-11-20 09:22:14.981590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.911 [2024-11-20 09:22:15.120068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.911 [2024-11-20 09:22:15.356440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.911 [2024-11-20 09:22:15.356657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.481 BaseBdev1_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.481 true 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.481 [2024-11-20 09:22:15.813795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:50.481 [2024-11-20 09:22:15.813867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.481 [2024-11-20 09:22:15.813894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:50.481 [2024-11-20 09:22:15.813908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.481 [2024-11-20 09:22:15.816530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.481 [2024-11-20 09:22:15.816652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:50.481 BaseBdev1 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.481 BaseBdev2_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.481 true 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.481 [2024-11-20 09:22:15.875951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:50.481 [2024-11-20 09:22:15.876105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.481 [2024-11-20 09:22:15.876135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:50.481 [2024-11-20 09:22:15.876150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.481 [2024-11-20 09:22:15.878884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.481 [2024-11-20 09:22:15.878935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:50.481 BaseBdev2 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.481 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.741 BaseBdev3_malloc 00:09:50.741 09:22:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.741 true 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.741 [2024-11-20 09:22:15.946472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.741 [2024-11-20 09:22:15.946538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.741 [2024-11-20 09:22:15.946561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.741 [2024-11-20 09:22:15.946573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.741 [2024-11-20 09:22:15.949119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.741 [2024-11-20 09:22:15.949244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:50.741 BaseBdev3 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.741 [2024-11-20 09:22:15.954547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.741 [2024-11-20 09:22:15.956792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.741 [2024-11-20 09:22:15.956894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.741 [2024-11-20 09:22:15.957138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.741 [2024-11-20 09:22:15.957152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.741 [2024-11-20 09:22:15.957491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:50.741 [2024-11-20 09:22:15.957687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.741 [2024-11-20 09:22:15.957702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.741 [2024-11-20 09:22:15.957896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.741 09:22:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.741 "name": "raid_bdev1", 00:09:50.741 "uuid": "9bb0bc3b-f6bc-4040-9a8c-0751b02ee937", 00:09:50.741 "strip_size_kb": 64, 00:09:50.741 "state": "online", 00:09:50.741 "raid_level": "concat", 00:09:50.741 "superblock": true, 00:09:50.741 "num_base_bdevs": 3, 00:09:50.741 "num_base_bdevs_discovered": 3, 00:09:50.741 "num_base_bdevs_operational": 3, 00:09:50.741 "base_bdevs_list": [ 00:09:50.741 { 00:09:50.741 "name": "BaseBdev1", 00:09:50.741 "uuid": "63b17164-3ad6-554b-988c-08a9baf83641", 00:09:50.741 "is_configured": true, 00:09:50.741 "data_offset": 2048, 00:09:50.741 "data_size": 63488 00:09:50.741 }, 00:09:50.741 { 00:09:50.741 "name": "BaseBdev2", 00:09:50.741 "uuid": "ecb2d958-e957-5dcb-9584-84816f855e8f", 00:09:50.741 "is_configured": true, 00:09:50.741 "data_offset": 2048, 00:09:50.741 "data_size": 63488 
00:09:50.741 }, 00:09:50.741 { 00:09:50.741 "name": "BaseBdev3", 00:09:50.741 "uuid": "fd613241-5fb7-52ff-87c4-9b51faddfffe", 00:09:50.741 "is_configured": true, 00:09:50.741 "data_offset": 2048, 00:09:50.741 "data_size": 63488 00:09:50.741 } 00:09:50.741 ] 00:09:50.741 }' 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.741 09:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.000 09:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:51.000 09:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:51.259 [2024-11-20 09:22:16.574970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.193 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.193 "name": "raid_bdev1", 00:09:52.193 "uuid": "9bb0bc3b-f6bc-4040-9a8c-0751b02ee937", 00:09:52.193 "strip_size_kb": 64, 00:09:52.193 "state": "online", 00:09:52.193 "raid_level": "concat", 00:09:52.193 "superblock": true, 00:09:52.193 "num_base_bdevs": 3, 00:09:52.193 "num_base_bdevs_discovered": 3, 00:09:52.193 "num_base_bdevs_operational": 3, 00:09:52.193 "base_bdevs_list": [ 00:09:52.193 { 00:09:52.193 "name": "BaseBdev1", 00:09:52.194 "uuid": "63b17164-3ad6-554b-988c-08a9baf83641", 00:09:52.194 "is_configured": true, 00:09:52.194 "data_offset": 2048, 00:09:52.194 "data_size": 63488 
00:09:52.194 }, 00:09:52.194 { 00:09:52.194 "name": "BaseBdev2", 00:09:52.194 "uuid": "ecb2d958-e957-5dcb-9584-84816f855e8f", 00:09:52.194 "is_configured": true, 00:09:52.194 "data_offset": 2048, 00:09:52.194 "data_size": 63488 00:09:52.194 }, 00:09:52.194 { 00:09:52.194 "name": "BaseBdev3", 00:09:52.194 "uuid": "fd613241-5fb7-52ff-87c4-9b51faddfffe", 00:09:52.194 "is_configured": true, 00:09:52.194 "data_offset": 2048, 00:09:52.194 "data_size": 63488 00:09:52.194 } 00:09:52.194 ] 00:09:52.194 }' 00:09:52.194 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.194 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.453 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.453 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.453 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 [2024-11-20 09:22:17.911740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.712 [2024-11-20 09:22:17.911782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.712 [2024-11-20 09:22:17.915042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.712 [2024-11-20 09:22:17.915098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.712 [2024-11-20 09:22:17.915141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.712 [2024-11-20 09:22:17.915156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.712 { 00:09:52.712 "results": [ 00:09:52.712 { 00:09:52.712 "job": "raid_bdev1", 00:09:52.712 "core_mask": "0x1", 00:09:52.712 "workload": "randrw", 00:09:52.712 "percentage": 50, 
00:09:52.712 "status": "finished", 00:09:52.712 "queue_depth": 1, 00:09:52.712 "io_size": 131072, 00:09:52.712 "runtime": 1.337213, 00:09:52.712 "iops": 13276.119810381742, 00:09:52.712 "mibps": 1659.5149762977178, 00:09:52.712 "io_failed": 1, 00:09:52.712 "io_timeout": 0, 00:09:52.712 "avg_latency_us": 104.643584199981, 00:09:52.712 "min_latency_us": 30.183406113537117, 00:09:52.712 "max_latency_us": 1652.709170305677 00:09:52.712 } 00:09:52.712 ], 00:09:52.712 "core_count": 1 00:09:52.712 } 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67405 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67405 ']' 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67405 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67405 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67405' 00:09:52.712 killing process with pid 67405 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67405 00:09:52.712 09:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67405 00:09:52.712 [2024-11-20 09:22:17.950623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.971 [2024-11-20 
09:22:18.225142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.W97tNCPY0w 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:54.353 00:09:54.353 real 0m4.878s 00:09:54.353 user 0m5.913s 00:09:54.353 sys 0m0.532s 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.353 09:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 ************************************ 00:09:54.353 END TEST raid_read_error_test 00:09:54.353 ************************************ 00:09:54.353 09:22:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:54.353 09:22:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.353 09:22:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.353 09:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 ************************************ 00:09:54.353 START TEST raid_write_error_test 00:09:54.353 ************************************ 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:54.353 09:22:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.353 09:22:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IurLNEfwrX 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67556 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67556 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67556 ']' 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.353 09:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.353 [2024-11-20 09:22:19.739849] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:09:54.353 [2024-11-20 09:22:19.740075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67556 ] 00:09:54.611 [2024-11-20 09:22:19.918337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.611 [2024-11-20 09:22:20.055059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.869 [2024-11-20 09:22:20.284100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.869 [2024-11-20 09:22:20.284217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 BaseBdev1_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 true 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 [2024-11-20 09:22:20.702094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.435 [2024-11-20 09:22:20.702245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.435 [2024-11-20 09:22:20.702278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.435 [2024-11-20 09:22:20.702291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.435 [2024-11-20 09:22:20.704920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.435 [2024-11-20 09:22:20.704975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.435 BaseBdev1 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.435 BaseBdev2_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 true 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 [2024-11-20 09:22:20.775264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.435 [2024-11-20 09:22:20.775341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.435 [2024-11-20 09:22:20.775364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.435 [2024-11-20 09:22:20.775377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.435 [2024-11-20 09:22:20.778017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.435 [2024-11-20 09:22:20.778067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.435 BaseBdev2 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.435 09:22:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 BaseBdev3_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 true 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 [2024-11-20 09:22:20.857269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:55.435 [2024-11-20 09:22:20.857346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.435 [2024-11-20 09:22:20.857371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:55.435 [2024-11-20 09:22:20.857384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.435 [2024-11-20 09:22:20.860028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.435 [2024-11-20 09:22:20.860081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:55.435 BaseBdev3 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 [2024-11-20 09:22:20.869350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.435 [2024-11-20 09:22:20.871535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.435 [2024-11-20 09:22:20.871629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.435 [2024-11-20 09:22:20.871864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.435 [2024-11-20 09:22:20.871878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.435 [2024-11-20 09:22:20.872199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:55.435 [2024-11-20 09:22:20.872374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.435 [2024-11-20 09:22:20.872389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:55.435 [2024-11-20 09:22:20.872622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.435 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.693 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.693 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.693 "name": "raid_bdev1", 00:09:55.694 "uuid": "a4cfa0fd-4ed1-461d-aa2c-15b64ded9d28", 00:09:55.694 "strip_size_kb": 64, 00:09:55.694 "state": "online", 00:09:55.694 "raid_level": "concat", 00:09:55.694 "superblock": true, 00:09:55.694 "num_base_bdevs": 3, 00:09:55.694 "num_base_bdevs_discovered": 3, 00:09:55.694 "num_base_bdevs_operational": 3, 00:09:55.694 "base_bdevs_list": [ 00:09:55.694 { 00:09:55.694 
"name": "BaseBdev1", 00:09:55.694 "uuid": "77611f38-b678-58dc-86b0-e40b439035aa", 00:09:55.694 "is_configured": true, 00:09:55.694 "data_offset": 2048, 00:09:55.694 "data_size": 63488 00:09:55.694 }, 00:09:55.694 { 00:09:55.694 "name": "BaseBdev2", 00:09:55.694 "uuid": "663a52cd-e919-5b7e-b0ed-97ebff14fad2", 00:09:55.694 "is_configured": true, 00:09:55.694 "data_offset": 2048, 00:09:55.694 "data_size": 63488 00:09:55.694 }, 00:09:55.694 { 00:09:55.694 "name": "BaseBdev3", 00:09:55.694 "uuid": "97fcb1d9-2a8f-5316-9aff-1e727de936e5", 00:09:55.694 "is_configured": true, 00:09:55.694 "data_offset": 2048, 00:09:55.694 "data_size": 63488 00:09:55.694 } 00:09:55.694 ] 00:09:55.694 }' 00:09:55.694 09:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.694 09:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.950 09:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.950 09:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.206 [2024-11-20 09:22:21.470029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.137 "name": "raid_bdev1", 00:09:57.137 "uuid": "a4cfa0fd-4ed1-461d-aa2c-15b64ded9d28", 00:09:57.137 "strip_size_kb": 64, 00:09:57.137 "state": "online", 
00:09:57.137 "raid_level": "concat", 00:09:57.137 "superblock": true, 00:09:57.137 "num_base_bdevs": 3, 00:09:57.137 "num_base_bdevs_discovered": 3, 00:09:57.137 "num_base_bdevs_operational": 3, 00:09:57.137 "base_bdevs_list": [ 00:09:57.137 { 00:09:57.137 "name": "BaseBdev1", 00:09:57.137 "uuid": "77611f38-b678-58dc-86b0-e40b439035aa", 00:09:57.137 "is_configured": true, 00:09:57.137 "data_offset": 2048, 00:09:57.137 "data_size": 63488 00:09:57.137 }, 00:09:57.137 { 00:09:57.137 "name": "BaseBdev2", 00:09:57.137 "uuid": "663a52cd-e919-5b7e-b0ed-97ebff14fad2", 00:09:57.137 "is_configured": true, 00:09:57.137 "data_offset": 2048, 00:09:57.137 "data_size": 63488 00:09:57.137 }, 00:09:57.137 { 00:09:57.137 "name": "BaseBdev3", 00:09:57.137 "uuid": "97fcb1d9-2a8f-5316-9aff-1e727de936e5", 00:09:57.137 "is_configured": true, 00:09:57.137 "data_offset": 2048, 00:09:57.137 "data_size": 63488 00:09:57.137 } 00:09:57.137 ] 00:09:57.137 }' 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.137 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.702 [2024-11-20 09:22:22.884141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.702 [2024-11-20 09:22:22.884270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.702 [2024-11-20 09:22:22.887514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.702 [2024-11-20 09:22:22.887636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.702 [2024-11-20 09:22:22.887704] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.702 [2024-11-20 09:22:22.887762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:57.702 { 00:09:57.702 "results": [ 00:09:57.702 { 00:09:57.702 "job": "raid_bdev1", 00:09:57.702 "core_mask": "0x1", 00:09:57.702 "workload": "randrw", 00:09:57.702 "percentage": 50, 00:09:57.702 "status": "finished", 00:09:57.702 "queue_depth": 1, 00:09:57.702 "io_size": 131072, 00:09:57.702 "runtime": 1.414672, 00:09:57.702 "iops": 13477.329020437246, 00:09:57.702 "mibps": 1684.6661275546558, 00:09:57.702 "io_failed": 1, 00:09:57.702 "io_timeout": 0, 00:09:57.702 "avg_latency_us": 102.99423274809148, 00:09:57.702 "min_latency_us": 28.05938864628821, 00:09:57.702 "max_latency_us": 1731.4096069868995 00:09:57.702 } 00:09:57.702 ], 00:09:57.702 "core_count": 1 00:09:57.702 } 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67556 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67556 ']' 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67556 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67556 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.702 killing process with pid 67556 00:09:57.702 
09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67556' 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67556 00:09:57.702 [2024-11-20 09:22:22.928613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.702 09:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67556 00:09:57.965 [2024-11-20 09:22:23.185383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IurLNEfwrX 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.355 ************************************ 00:09:59.355 END TEST raid_write_error_test 00:09:59.355 ************************************ 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:59.355 00:09:59.355 real 0m4.896s 00:09:59.355 user 0m5.875s 00:09:59.355 sys 0m0.621s 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.355 09:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.355 09:22:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:59.355 09:22:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:59.355 09:22:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.355 09:22:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.355 09:22:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.355 ************************************ 00:09:59.355 START TEST raid_state_function_test 00:09:59.355 ************************************ 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67700 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67700' 00:09:59.355 Process raid pid: 67700 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67700 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67700 ']' 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.355 09:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.355 [2024-11-20 09:22:24.700089] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:09:59.355 [2024-11-20 09:22:24.700333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.613 [2024-11-20 09:22:24.859036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.613 [2024-11-20 09:22:24.987773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.871 [2024-11-20 09:22:25.216956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.871 [2024-11-20 09:22:25.217117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.130 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.130 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.130 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.130 09:22:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 [2024-11-20 09:22:25.582187] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.130 [2024-11-20 09:22:25.582324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.130 [2024-11-20 09:22:25.582362] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.130 [2024-11-20 09:22:25.582390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.130 [2024-11-20 09:22:25.582412] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.130 [2024-11-20 09:22:25.582450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.388 
09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.388 "name": "Existed_Raid", 00:10:00.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.388 "strip_size_kb": 0, 00:10:00.388 "state": "configuring", 00:10:00.388 "raid_level": "raid1", 00:10:00.388 "superblock": false, 00:10:00.388 "num_base_bdevs": 3, 00:10:00.388 "num_base_bdevs_discovered": 0, 00:10:00.388 "num_base_bdevs_operational": 3, 00:10:00.388 "base_bdevs_list": [ 00:10:00.388 { 00:10:00.388 "name": "BaseBdev1", 00:10:00.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.388 "is_configured": false, 00:10:00.388 "data_offset": 0, 00:10:00.388 "data_size": 0 00:10:00.388 }, 00:10:00.388 { 00:10:00.388 "name": "BaseBdev2", 00:10:00.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.388 "is_configured": false, 00:10:00.388 "data_offset": 0, 00:10:00.388 "data_size": 0 00:10:00.388 }, 00:10:00.388 { 00:10:00.388 "name": "BaseBdev3", 00:10:00.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.388 "is_configured": false, 00:10:00.388 "data_offset": 0, 00:10:00.388 "data_size": 0 00:10:00.388 } 00:10:00.388 ] 00:10:00.388 }' 00:10:00.388 09:22:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.388 09:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.647 [2024-11-20 09:22:26.057367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.647 [2024-11-20 09:22:26.057527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.647 [2024-11-20 09:22:26.069332] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.647 [2024-11-20 09:22:26.069454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.647 [2024-11-20 09:22:26.069470] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.647 [2024-11-20 09:22:26.069482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.647 [2024-11-20 09:22:26.069489] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.647 [2024-11-20 09:22:26.069499] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.647 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.905 [2024-11-20 09:22:26.119407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.905 BaseBdev1 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.905 [ 00:10:00.905 { 00:10:00.905 "name": "BaseBdev1", 00:10:00.905 "aliases": [ 00:10:00.905 "68bca5a0-944a-4afa-8805-9a18ddbc7635" 00:10:00.905 ], 00:10:00.905 "product_name": "Malloc disk", 00:10:00.905 "block_size": 512, 00:10:00.905 "num_blocks": 65536, 00:10:00.905 "uuid": "68bca5a0-944a-4afa-8805-9a18ddbc7635", 00:10:00.905 "assigned_rate_limits": { 00:10:00.905 "rw_ios_per_sec": 0, 00:10:00.905 "rw_mbytes_per_sec": 0, 00:10:00.905 "r_mbytes_per_sec": 0, 00:10:00.905 "w_mbytes_per_sec": 0 00:10:00.905 }, 00:10:00.905 "claimed": true, 00:10:00.905 "claim_type": "exclusive_write", 00:10:00.905 "zoned": false, 00:10:00.905 "supported_io_types": { 00:10:00.905 "read": true, 00:10:00.905 "write": true, 00:10:00.905 "unmap": true, 00:10:00.905 "flush": true, 00:10:00.905 "reset": true, 00:10:00.905 "nvme_admin": false, 00:10:00.905 "nvme_io": false, 00:10:00.905 "nvme_io_md": false, 00:10:00.905 "write_zeroes": true, 00:10:00.905 "zcopy": true, 00:10:00.905 "get_zone_info": false, 00:10:00.905 "zone_management": false, 00:10:00.905 "zone_append": false, 00:10:00.905 "compare": false, 00:10:00.905 "compare_and_write": false, 00:10:00.905 "abort": true, 00:10:00.905 "seek_hole": false, 00:10:00.905 "seek_data": false, 00:10:00.905 "copy": true, 00:10:00.905 "nvme_iov_md": false 00:10:00.905 }, 00:10:00.905 "memory_domains": [ 00:10:00.905 { 00:10:00.905 "dma_device_id": "system", 00:10:00.905 "dma_device_type": 1 00:10:00.905 }, 00:10:00.905 { 00:10:00.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.905 "dma_device_type": 2 00:10:00.905 } 00:10:00.905 ], 00:10:00.905 "driver_specific": {} 00:10:00.905 } 00:10:00.905 ] 00:10:00.905 09:22:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.905 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:00.906 "name": "Existed_Raid", 00:10:00.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.906 "strip_size_kb": 0, 00:10:00.906 "state": "configuring", 00:10:00.906 "raid_level": "raid1", 00:10:00.906 "superblock": false, 00:10:00.906 "num_base_bdevs": 3, 00:10:00.906 "num_base_bdevs_discovered": 1, 00:10:00.906 "num_base_bdevs_operational": 3, 00:10:00.906 "base_bdevs_list": [ 00:10:00.906 { 00:10:00.906 "name": "BaseBdev1", 00:10:00.906 "uuid": "68bca5a0-944a-4afa-8805-9a18ddbc7635", 00:10:00.906 "is_configured": true, 00:10:00.906 "data_offset": 0, 00:10:00.906 "data_size": 65536 00:10:00.906 }, 00:10:00.906 { 00:10:00.906 "name": "BaseBdev2", 00:10:00.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.906 "is_configured": false, 00:10:00.906 "data_offset": 0, 00:10:00.906 "data_size": 0 00:10:00.906 }, 00:10:00.906 { 00:10:00.906 "name": "BaseBdev3", 00:10:00.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.906 "is_configured": false, 00:10:00.906 "data_offset": 0, 00:10:00.906 "data_size": 0 00:10:00.906 } 00:10:00.906 ] 00:10:00.906 }' 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.906 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.163 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.163 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.163 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.163 [2024-11-20 09:22:26.598685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.164 [2024-11-20 09:22:26.598825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:01.164 09:22:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.164 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.164 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.164 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.164 [2024-11-20 09:22:26.610733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.164 [2024-11-20 09:22:26.612826] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.164 [2024-11-20 09:22:26.612958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.164 [2024-11-20 09:22:26.612978] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.164 [2024-11-20 09:22:26.612991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.421 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.421 "name": "Existed_Raid", 00:10:01.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.421 "strip_size_kb": 0, 00:10:01.421 "state": "configuring", 00:10:01.421 "raid_level": "raid1", 00:10:01.421 "superblock": false, 00:10:01.421 "num_base_bdevs": 3, 00:10:01.421 "num_base_bdevs_discovered": 1, 00:10:01.421 "num_base_bdevs_operational": 3, 00:10:01.421 "base_bdevs_list": [ 00:10:01.421 { 00:10:01.421 "name": "BaseBdev1", 00:10:01.421 "uuid": "68bca5a0-944a-4afa-8805-9a18ddbc7635", 00:10:01.421 "is_configured": true, 00:10:01.421 "data_offset": 0, 00:10:01.421 "data_size": 65536 00:10:01.421 }, 00:10:01.421 { 00:10:01.421 "name": "BaseBdev2", 00:10:01.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.421 
"is_configured": false, 00:10:01.421 "data_offset": 0, 00:10:01.421 "data_size": 0 00:10:01.421 }, 00:10:01.421 { 00:10:01.421 "name": "BaseBdev3", 00:10:01.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.422 "is_configured": false, 00:10:01.422 "data_offset": 0, 00:10:01.422 "data_size": 0 00:10:01.422 } 00:10:01.422 ] 00:10:01.422 }' 00:10:01.422 09:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.422 09:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.700 [2024-11-20 09:22:27.136710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.700 BaseBdev2 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.700 09:22:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.700 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.960 [ 00:10:01.960 { 00:10:01.960 "name": "BaseBdev2", 00:10:01.960 "aliases": [ 00:10:01.960 "8cef7aaa-c947-415d-8625-47ec598df59a" 00:10:01.960 ], 00:10:01.960 "product_name": "Malloc disk", 00:10:01.960 "block_size": 512, 00:10:01.960 "num_blocks": 65536, 00:10:01.960 "uuid": "8cef7aaa-c947-415d-8625-47ec598df59a", 00:10:01.960 "assigned_rate_limits": { 00:10:01.960 "rw_ios_per_sec": 0, 00:10:01.960 "rw_mbytes_per_sec": 0, 00:10:01.960 "r_mbytes_per_sec": 0, 00:10:01.960 "w_mbytes_per_sec": 0 00:10:01.960 }, 00:10:01.960 "claimed": true, 00:10:01.960 "claim_type": "exclusive_write", 00:10:01.960 "zoned": false, 00:10:01.960 "supported_io_types": { 00:10:01.960 "read": true, 00:10:01.960 "write": true, 00:10:01.960 "unmap": true, 00:10:01.960 "flush": true, 00:10:01.960 "reset": true, 00:10:01.960 "nvme_admin": false, 00:10:01.960 "nvme_io": false, 00:10:01.960 "nvme_io_md": false, 00:10:01.960 "write_zeroes": true, 00:10:01.960 "zcopy": true, 00:10:01.960 "get_zone_info": false, 00:10:01.960 "zone_management": false, 00:10:01.960 "zone_append": false, 00:10:01.960 "compare": false, 00:10:01.960 "compare_and_write": false, 00:10:01.960 "abort": true, 00:10:01.960 "seek_hole": false, 00:10:01.960 "seek_data": false, 00:10:01.960 "copy": true, 00:10:01.960 "nvme_iov_md": false 00:10:01.960 }, 00:10:01.960 
"memory_domains": [ 00:10:01.960 { 00:10:01.960 "dma_device_id": "system", 00:10:01.960 "dma_device_type": 1 00:10:01.960 }, 00:10:01.960 { 00:10:01.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.960 "dma_device_type": 2 00:10:01.960 } 00:10:01.960 ], 00:10:01.960 "driver_specific": {} 00:10:01.960 } 00:10:01.960 ] 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.960 "name": "Existed_Raid", 00:10:01.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.960 "strip_size_kb": 0, 00:10:01.960 "state": "configuring", 00:10:01.960 "raid_level": "raid1", 00:10:01.960 "superblock": false, 00:10:01.960 "num_base_bdevs": 3, 00:10:01.960 "num_base_bdevs_discovered": 2, 00:10:01.960 "num_base_bdevs_operational": 3, 00:10:01.960 "base_bdevs_list": [ 00:10:01.960 { 00:10:01.960 "name": "BaseBdev1", 00:10:01.960 "uuid": "68bca5a0-944a-4afa-8805-9a18ddbc7635", 00:10:01.960 "is_configured": true, 00:10:01.960 "data_offset": 0, 00:10:01.960 "data_size": 65536 00:10:01.960 }, 00:10:01.960 { 00:10:01.960 "name": "BaseBdev2", 00:10:01.960 "uuid": "8cef7aaa-c947-415d-8625-47ec598df59a", 00:10:01.960 "is_configured": true, 00:10:01.960 "data_offset": 0, 00:10:01.960 "data_size": 65536 00:10:01.960 }, 00:10:01.960 { 00:10:01.960 "name": "BaseBdev3", 00:10:01.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.960 "is_configured": false, 00:10:01.960 "data_offset": 0, 00:10:01.960 "data_size": 0 00:10:01.960 } 00:10:01.960 ] 00:10:01.960 }' 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.960 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 [2024-11-20 09:22:27.758075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.525 [2024-11-20 09:22:27.758233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.525 [2024-11-20 09:22:27.758269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:02.525 [2024-11-20 09:22:27.758661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:02.525 [2024-11-20 09:22:27.758896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.525 [2024-11-20 09:22:27.758912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:02.525 [2024-11-20 09:22:27.759228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.525 BaseBdev3 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 [ 00:10:02.525 { 00:10:02.525 "name": "BaseBdev3", 00:10:02.525 "aliases": [ 00:10:02.525 "e014f32e-aaea-432d-baff-ce63b36b4f9e" 00:10:02.525 ], 00:10:02.525 "product_name": "Malloc disk", 00:10:02.525 "block_size": 512, 00:10:02.525 "num_blocks": 65536, 00:10:02.525 "uuid": "e014f32e-aaea-432d-baff-ce63b36b4f9e", 00:10:02.525 "assigned_rate_limits": { 00:10:02.525 "rw_ios_per_sec": 0, 00:10:02.525 "rw_mbytes_per_sec": 0, 00:10:02.525 "r_mbytes_per_sec": 0, 00:10:02.525 "w_mbytes_per_sec": 0 00:10:02.525 }, 00:10:02.525 "claimed": true, 00:10:02.525 "claim_type": "exclusive_write", 00:10:02.525 "zoned": false, 00:10:02.525 "supported_io_types": { 00:10:02.525 "read": true, 00:10:02.525 "write": true, 00:10:02.525 "unmap": true, 00:10:02.525 "flush": true, 00:10:02.525 "reset": true, 00:10:02.525 "nvme_admin": false, 00:10:02.525 "nvme_io": false, 00:10:02.525 "nvme_io_md": false, 00:10:02.525 "write_zeroes": true, 00:10:02.525 "zcopy": true, 00:10:02.525 "get_zone_info": false, 00:10:02.525 "zone_management": false, 00:10:02.525 "zone_append": false, 00:10:02.525 "compare": false, 00:10:02.525 "compare_and_write": false, 00:10:02.525 "abort": true, 00:10:02.525 "seek_hole": false, 00:10:02.525 "seek_data": false, 00:10:02.525 
"copy": true, 00:10:02.525 "nvme_iov_md": false 00:10:02.525 }, 00:10:02.525 "memory_domains": [ 00:10:02.525 { 00:10:02.525 "dma_device_id": "system", 00:10:02.525 "dma_device_type": 1 00:10:02.525 }, 00:10:02.525 { 00:10:02.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.525 "dma_device_type": 2 00:10:02.525 } 00:10:02.525 ], 00:10:02.525 "driver_specific": {} 00:10:02.525 } 00:10:02.525 ] 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.525 09:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.525 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.526 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.526 "name": "Existed_Raid", 00:10:02.526 "uuid": "3f653d9e-3f7b-4f3e-90fe-6bd8270f2fe3", 00:10:02.526 "strip_size_kb": 0, 00:10:02.526 "state": "online", 00:10:02.526 "raid_level": "raid1", 00:10:02.526 "superblock": false, 00:10:02.526 "num_base_bdevs": 3, 00:10:02.526 "num_base_bdevs_discovered": 3, 00:10:02.526 "num_base_bdevs_operational": 3, 00:10:02.526 "base_bdevs_list": [ 00:10:02.526 { 00:10:02.526 "name": "BaseBdev1", 00:10:02.526 "uuid": "68bca5a0-944a-4afa-8805-9a18ddbc7635", 00:10:02.526 "is_configured": true, 00:10:02.526 "data_offset": 0, 00:10:02.526 "data_size": 65536 00:10:02.526 }, 00:10:02.526 { 00:10:02.526 "name": "BaseBdev2", 00:10:02.526 "uuid": "8cef7aaa-c947-415d-8625-47ec598df59a", 00:10:02.526 "is_configured": true, 00:10:02.526 "data_offset": 0, 00:10:02.526 "data_size": 65536 00:10:02.526 }, 00:10:02.526 { 00:10:02.526 "name": "BaseBdev3", 00:10:02.526 "uuid": "e014f32e-aaea-432d-baff-ce63b36b4f9e", 00:10:02.526 "is_configured": true, 00:10:02.526 "data_offset": 0, 00:10:02.526 "data_size": 65536 00:10:02.526 } 00:10:02.526 ] 00:10:02.526 }' 00:10:02.526 09:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.526 09:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.786 09:22:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.786 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.786 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.786 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.786 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.786 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.046 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.046 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.046 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.046 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.046 [2024-11-20 09:22:28.245716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.046 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.046 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.046 "name": "Existed_Raid", 00:10:03.046 "aliases": [ 00:10:03.046 "3f653d9e-3f7b-4f3e-90fe-6bd8270f2fe3" 00:10:03.046 ], 00:10:03.046 "product_name": "Raid Volume", 00:10:03.046 "block_size": 512, 00:10:03.046 "num_blocks": 65536, 00:10:03.046 "uuid": "3f653d9e-3f7b-4f3e-90fe-6bd8270f2fe3", 00:10:03.046 "assigned_rate_limits": { 00:10:03.046 "rw_ios_per_sec": 0, 00:10:03.046 "rw_mbytes_per_sec": 0, 00:10:03.046 "r_mbytes_per_sec": 0, 00:10:03.046 "w_mbytes_per_sec": 0 00:10:03.046 }, 00:10:03.046 "claimed": false, 00:10:03.046 "zoned": false, 
00:10:03.046 "supported_io_types": { 00:10:03.046 "read": true, 00:10:03.046 "write": true, 00:10:03.046 "unmap": false, 00:10:03.046 "flush": false, 00:10:03.046 "reset": true, 00:10:03.046 "nvme_admin": false, 00:10:03.046 "nvme_io": false, 00:10:03.046 "nvme_io_md": false, 00:10:03.046 "write_zeroes": true, 00:10:03.046 "zcopy": false, 00:10:03.046 "get_zone_info": false, 00:10:03.046 "zone_management": false, 00:10:03.046 "zone_append": false, 00:10:03.046 "compare": false, 00:10:03.046 "compare_and_write": false, 00:10:03.046 "abort": false, 00:10:03.046 "seek_hole": false, 00:10:03.046 "seek_data": false, 00:10:03.046 "copy": false, 00:10:03.046 "nvme_iov_md": false 00:10:03.046 }, 00:10:03.046 "memory_domains": [ 00:10:03.046 { 00:10:03.046 "dma_device_id": "system", 00:10:03.047 "dma_device_type": 1 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.047 "dma_device_type": 2 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "dma_device_id": "system", 00:10:03.047 "dma_device_type": 1 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.047 "dma_device_type": 2 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "dma_device_id": "system", 00:10:03.047 "dma_device_type": 1 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.047 "dma_device_type": 2 00:10:03.047 } 00:10:03.047 ], 00:10:03.047 "driver_specific": { 00:10:03.047 "raid": { 00:10:03.047 "uuid": "3f653d9e-3f7b-4f3e-90fe-6bd8270f2fe3", 00:10:03.047 "strip_size_kb": 0, 00:10:03.047 "state": "online", 00:10:03.047 "raid_level": "raid1", 00:10:03.047 "superblock": false, 00:10:03.047 "num_base_bdevs": 3, 00:10:03.047 "num_base_bdevs_discovered": 3, 00:10:03.047 "num_base_bdevs_operational": 3, 00:10:03.047 "base_bdevs_list": [ 00:10:03.047 { 00:10:03.047 "name": "BaseBdev1", 00:10:03.047 "uuid": "68bca5a0-944a-4afa-8805-9a18ddbc7635", 00:10:03.047 "is_configured": true, 00:10:03.047 
"data_offset": 0, 00:10:03.047 "data_size": 65536 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "name": "BaseBdev2", 00:10:03.047 "uuid": "8cef7aaa-c947-415d-8625-47ec598df59a", 00:10:03.047 "is_configured": true, 00:10:03.047 "data_offset": 0, 00:10:03.047 "data_size": 65536 00:10:03.047 }, 00:10:03.047 { 00:10:03.047 "name": "BaseBdev3", 00:10:03.047 "uuid": "e014f32e-aaea-432d-baff-ce63b36b4f9e", 00:10:03.047 "is_configured": true, 00:10:03.047 "data_offset": 0, 00:10:03.047 "data_size": 65536 00:10:03.047 } 00:10:03.047 ] 00:10:03.047 } 00:10:03.047 } 00:10:03.047 }' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.047 BaseBdev2 00:10:03.047 BaseBdev3' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.047 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.307 [2024-11-20 09:22:28.528963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.307 "name": "Existed_Raid", 00:10:03.307 "uuid": "3f653d9e-3f7b-4f3e-90fe-6bd8270f2fe3", 00:10:03.307 "strip_size_kb": 0, 00:10:03.307 "state": "online", 00:10:03.307 "raid_level": "raid1", 00:10:03.307 "superblock": false, 00:10:03.307 "num_base_bdevs": 3, 00:10:03.307 "num_base_bdevs_discovered": 2, 00:10:03.307 "num_base_bdevs_operational": 2, 00:10:03.307 "base_bdevs_list": [ 00:10:03.307 { 00:10:03.307 "name": null, 00:10:03.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.307 "is_configured": false, 00:10:03.307 "data_offset": 0, 00:10:03.307 "data_size": 65536 00:10:03.307 }, 00:10:03.307 { 00:10:03.307 "name": "BaseBdev2", 00:10:03.307 "uuid": "8cef7aaa-c947-415d-8625-47ec598df59a", 00:10:03.307 "is_configured": true, 00:10:03.307 "data_offset": 0, 00:10:03.307 "data_size": 65536 00:10:03.307 }, 00:10:03.307 { 00:10:03.307 "name": "BaseBdev3", 00:10:03.307 "uuid": "e014f32e-aaea-432d-baff-ce63b36b4f9e", 00:10:03.307 "is_configured": true, 00:10:03.307 "data_offset": 0, 00:10:03.307 "data_size": 65536 00:10:03.307 } 00:10:03.307 ] 
00:10:03.307 }' 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.307 09:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.876 [2024-11-20 09:22:29.143590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.876 09:22:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.876 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.876 [2024-11-20 09:22:29.312768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.876 [2024-11-20 09:22:29.312893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.134 [2024-11-20 09:22:29.429602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.134 [2024-11-20 09:22:29.429764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.134 [2024-11-20 09:22:29.429787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.134 09:22:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.134 BaseBdev2 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.134 
09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.134 [ 00:10:04.134 { 00:10:04.134 "name": "BaseBdev2", 00:10:04.134 "aliases": [ 00:10:04.134 "5ba49311-f20a-485a-9607-9dc10ec0b34e" 00:10:04.134 ], 00:10:04.134 "product_name": "Malloc disk", 00:10:04.134 "block_size": 512, 00:10:04.134 "num_blocks": 65536, 00:10:04.134 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:04.134 "assigned_rate_limits": { 00:10:04.134 "rw_ios_per_sec": 0, 00:10:04.134 "rw_mbytes_per_sec": 0, 00:10:04.134 "r_mbytes_per_sec": 0, 00:10:04.134 "w_mbytes_per_sec": 0 00:10:04.134 }, 00:10:04.134 "claimed": false, 00:10:04.134 "zoned": false, 00:10:04.134 "supported_io_types": { 00:10:04.134 "read": true, 00:10:04.134 "write": true, 00:10:04.134 "unmap": true, 00:10:04.134 "flush": true, 00:10:04.134 "reset": true, 00:10:04.134 "nvme_admin": false, 00:10:04.134 "nvme_io": false, 00:10:04.134 "nvme_io_md": false, 00:10:04.134 "write_zeroes": true, 
00:10:04.134 "zcopy": true, 00:10:04.134 "get_zone_info": false, 00:10:04.134 "zone_management": false, 00:10:04.134 "zone_append": false, 00:10:04.134 "compare": false, 00:10:04.134 "compare_and_write": false, 00:10:04.134 "abort": true, 00:10:04.134 "seek_hole": false, 00:10:04.134 "seek_data": false, 00:10:04.134 "copy": true, 00:10:04.134 "nvme_iov_md": false 00:10:04.134 }, 00:10:04.134 "memory_domains": [ 00:10:04.134 { 00:10:04.134 "dma_device_id": "system", 00:10:04.134 "dma_device_type": 1 00:10:04.134 }, 00:10:04.134 { 00:10:04.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.134 "dma_device_type": 2 00:10:04.134 } 00:10:04.134 ], 00:10:04.134 "driver_specific": {} 00:10:04.134 } 00:10:04.134 ] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.134 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.393 BaseBdev3 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.393 09:22:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.393 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.393 [ 00:10:04.393 { 00:10:04.393 "name": "BaseBdev3", 00:10:04.393 "aliases": [ 00:10:04.393 "b69bf418-000c-408e-8cf4-bd70c9019c45" 00:10:04.393 ], 00:10:04.393 "product_name": "Malloc disk", 00:10:04.393 "block_size": 512, 00:10:04.393 "num_blocks": 65536, 00:10:04.393 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:04.393 "assigned_rate_limits": { 00:10:04.393 "rw_ios_per_sec": 0, 00:10:04.393 "rw_mbytes_per_sec": 0, 00:10:04.393 "r_mbytes_per_sec": 0, 00:10:04.393 "w_mbytes_per_sec": 0 00:10:04.393 }, 00:10:04.393 "claimed": false, 00:10:04.393 "zoned": false, 00:10:04.393 "supported_io_types": { 00:10:04.393 "read": true, 00:10:04.393 "write": true, 00:10:04.393 "unmap": true, 00:10:04.394 "flush": true, 00:10:04.394 "reset": true, 00:10:04.394 "nvme_admin": false, 00:10:04.394 "nvme_io": false, 00:10:04.394 "nvme_io_md": false, 00:10:04.394 "write_zeroes": true, 
00:10:04.394 "zcopy": true, 00:10:04.394 "get_zone_info": false, 00:10:04.394 "zone_management": false, 00:10:04.394 "zone_append": false, 00:10:04.394 "compare": false, 00:10:04.394 "compare_and_write": false, 00:10:04.394 "abort": true, 00:10:04.394 "seek_hole": false, 00:10:04.394 "seek_data": false, 00:10:04.394 "copy": true, 00:10:04.394 "nvme_iov_md": false 00:10:04.394 }, 00:10:04.394 "memory_domains": [ 00:10:04.394 { 00:10:04.394 "dma_device_id": "system", 00:10:04.394 "dma_device_type": 1 00:10:04.394 }, 00:10:04.394 { 00:10:04.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.394 "dma_device_type": 2 00:10:04.394 } 00:10:04.394 ], 00:10:04.394 "driver_specific": {} 00:10:04.394 } 00:10:04.394 ] 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.394 [2024-11-20 09:22:29.648940] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.394 [2024-11-20 09:22:29.649118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.394 [2024-11-20 09:22:29.649204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.394 [2024-11-20 09:22:29.651688] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:04.394 "name": "Existed_Raid", 00:10:04.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.394 "strip_size_kb": 0, 00:10:04.394 "state": "configuring", 00:10:04.394 "raid_level": "raid1", 00:10:04.394 "superblock": false, 00:10:04.394 "num_base_bdevs": 3, 00:10:04.394 "num_base_bdevs_discovered": 2, 00:10:04.394 "num_base_bdevs_operational": 3, 00:10:04.394 "base_bdevs_list": [ 00:10:04.394 { 00:10:04.394 "name": "BaseBdev1", 00:10:04.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.394 "is_configured": false, 00:10:04.394 "data_offset": 0, 00:10:04.394 "data_size": 0 00:10:04.394 }, 00:10:04.394 { 00:10:04.394 "name": "BaseBdev2", 00:10:04.394 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:04.394 "is_configured": true, 00:10:04.394 "data_offset": 0, 00:10:04.394 "data_size": 65536 00:10:04.394 }, 00:10:04.394 { 00:10:04.394 "name": "BaseBdev3", 00:10:04.394 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:04.394 "is_configured": true, 00:10:04.394 "data_offset": 0, 00:10:04.394 "data_size": 65536 00:10:04.394 } 00:10:04.394 ] 00:10:04.394 }' 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.394 09:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.962 [2024-11-20 09:22:30.140227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.962 "name": "Existed_Raid", 00:10:04.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.962 "strip_size_kb": 0, 00:10:04.962 "state": "configuring", 00:10:04.962 "raid_level": "raid1", 00:10:04.962 "superblock": false, 00:10:04.962 "num_base_bdevs": 3, 
00:10:04.962 "num_base_bdevs_discovered": 1, 00:10:04.962 "num_base_bdevs_operational": 3, 00:10:04.962 "base_bdevs_list": [ 00:10:04.962 { 00:10:04.962 "name": "BaseBdev1", 00:10:04.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.962 "is_configured": false, 00:10:04.962 "data_offset": 0, 00:10:04.962 "data_size": 0 00:10:04.962 }, 00:10:04.962 { 00:10:04.962 "name": null, 00:10:04.962 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:04.962 "is_configured": false, 00:10:04.962 "data_offset": 0, 00:10:04.962 "data_size": 65536 00:10:04.962 }, 00:10:04.962 { 00:10:04.962 "name": "BaseBdev3", 00:10:04.962 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:04.962 "is_configured": true, 00:10:04.962 "data_offset": 0, 00:10:04.962 "data_size": 65536 00:10:04.962 } 00:10:04.962 ] 00:10:04.962 }' 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.962 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.222 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.222 09:22:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.483 [2024-11-20 09:22:30.691865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.483 BaseBdev1 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.483 [ 00:10:05.483 { 00:10:05.483 "name": "BaseBdev1", 00:10:05.483 "aliases": [ 00:10:05.483 "5d280489-f5af-4144-940b-49120991711b" 00:10:05.483 ], 00:10:05.483 "product_name": "Malloc disk", 
00:10:05.483 "block_size": 512, 00:10:05.483 "num_blocks": 65536, 00:10:05.483 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:05.483 "assigned_rate_limits": { 00:10:05.483 "rw_ios_per_sec": 0, 00:10:05.483 "rw_mbytes_per_sec": 0, 00:10:05.483 "r_mbytes_per_sec": 0, 00:10:05.483 "w_mbytes_per_sec": 0 00:10:05.483 }, 00:10:05.483 "claimed": true, 00:10:05.483 "claim_type": "exclusive_write", 00:10:05.483 "zoned": false, 00:10:05.483 "supported_io_types": { 00:10:05.483 "read": true, 00:10:05.483 "write": true, 00:10:05.483 "unmap": true, 00:10:05.483 "flush": true, 00:10:05.483 "reset": true, 00:10:05.483 "nvme_admin": false, 00:10:05.483 "nvme_io": false, 00:10:05.483 "nvme_io_md": false, 00:10:05.483 "write_zeroes": true, 00:10:05.483 "zcopy": true, 00:10:05.483 "get_zone_info": false, 00:10:05.483 "zone_management": false, 00:10:05.483 "zone_append": false, 00:10:05.483 "compare": false, 00:10:05.483 "compare_and_write": false, 00:10:05.483 "abort": true, 00:10:05.483 "seek_hole": false, 00:10:05.483 "seek_data": false, 00:10:05.483 "copy": true, 00:10:05.483 "nvme_iov_md": false 00:10:05.483 }, 00:10:05.483 "memory_domains": [ 00:10:05.483 { 00:10:05.483 "dma_device_id": "system", 00:10:05.483 "dma_device_type": 1 00:10:05.483 }, 00:10:05.483 { 00:10:05.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.483 "dma_device_type": 2 00:10:05.483 } 00:10:05.483 ], 00:10:05.483 "driver_specific": {} 00:10:05.483 } 00:10:05.483 ] 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.483 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.483 "name": "Existed_Raid", 00:10:05.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.483 "strip_size_kb": 0, 00:10:05.483 "state": "configuring", 00:10:05.483 "raid_level": "raid1", 00:10:05.483 "superblock": false, 00:10:05.483 "num_base_bdevs": 3, 00:10:05.484 "num_base_bdevs_discovered": 2, 00:10:05.484 "num_base_bdevs_operational": 3, 00:10:05.484 "base_bdevs_list": [ 00:10:05.484 { 00:10:05.484 "name": "BaseBdev1", 00:10:05.484 "uuid": 
"5d280489-f5af-4144-940b-49120991711b", 00:10:05.484 "is_configured": true, 00:10:05.484 "data_offset": 0, 00:10:05.484 "data_size": 65536 00:10:05.484 }, 00:10:05.484 { 00:10:05.484 "name": null, 00:10:05.484 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:05.484 "is_configured": false, 00:10:05.484 "data_offset": 0, 00:10:05.484 "data_size": 65536 00:10:05.484 }, 00:10:05.484 { 00:10:05.484 "name": "BaseBdev3", 00:10:05.484 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:05.484 "is_configured": true, 00:10:05.484 "data_offset": 0, 00:10:05.484 "data_size": 65536 00:10:05.484 } 00:10:05.484 ] 00:10:05.484 }' 00:10:05.484 09:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.484 09:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.744 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.744 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.744 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.744 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.004 [2024-11-20 09:22:31.243069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.004 09:22:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.004 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.005 "name": "Existed_Raid", 00:10:06.005 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:06.005 "strip_size_kb": 0, 00:10:06.005 "state": "configuring", 00:10:06.005 "raid_level": "raid1", 00:10:06.005 "superblock": false, 00:10:06.005 "num_base_bdevs": 3, 00:10:06.005 "num_base_bdevs_discovered": 1, 00:10:06.005 "num_base_bdevs_operational": 3, 00:10:06.005 "base_bdevs_list": [ 00:10:06.005 { 00:10:06.005 "name": "BaseBdev1", 00:10:06.005 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:06.005 "is_configured": true, 00:10:06.005 "data_offset": 0, 00:10:06.005 "data_size": 65536 00:10:06.005 }, 00:10:06.005 { 00:10:06.005 "name": null, 00:10:06.005 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:06.005 "is_configured": false, 00:10:06.005 "data_offset": 0, 00:10:06.005 "data_size": 65536 00:10:06.005 }, 00:10:06.005 { 00:10:06.005 "name": null, 00:10:06.005 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:06.005 "is_configured": false, 00:10:06.005 "data_offset": 0, 00:10:06.005 "data_size": 65536 00:10:06.005 } 00:10:06.005 ] 00:10:06.005 }' 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.005 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.575 [2024-11-20 09:22:31.782321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.575 "name": "Existed_Raid", 00:10:06.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.575 "strip_size_kb": 0, 00:10:06.575 "state": "configuring", 00:10:06.575 "raid_level": "raid1", 00:10:06.575 "superblock": false, 00:10:06.575 "num_base_bdevs": 3, 00:10:06.575 "num_base_bdevs_discovered": 2, 00:10:06.575 "num_base_bdevs_operational": 3, 00:10:06.575 "base_bdevs_list": [ 00:10:06.575 { 00:10:06.575 "name": "BaseBdev1", 00:10:06.575 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:06.575 "is_configured": true, 00:10:06.575 "data_offset": 0, 00:10:06.575 "data_size": 65536 00:10:06.575 }, 00:10:06.575 { 00:10:06.575 "name": null, 00:10:06.575 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:06.575 "is_configured": false, 00:10:06.575 "data_offset": 0, 00:10:06.575 "data_size": 65536 00:10:06.575 }, 00:10:06.575 { 00:10:06.575 "name": "BaseBdev3", 00:10:06.575 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:06.575 "is_configured": true, 00:10:06.575 "data_offset": 0, 00:10:06.575 "data_size": 65536 00:10:06.575 } 00:10:06.575 ] 00:10:06.575 }' 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.575 09:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.845 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.845 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.845 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:06.845 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.845 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.103 [2024-11-20 09:22:32.313406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.103 09:22:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.103 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.103 "name": "Existed_Raid", 00:10:07.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.103 "strip_size_kb": 0, 00:10:07.103 "state": "configuring", 00:10:07.103 "raid_level": "raid1", 00:10:07.103 "superblock": false, 00:10:07.103 "num_base_bdevs": 3, 00:10:07.103 "num_base_bdevs_discovered": 1, 00:10:07.103 "num_base_bdevs_operational": 3, 00:10:07.103 "base_bdevs_list": [ 00:10:07.103 { 00:10:07.103 "name": null, 00:10:07.103 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:07.103 "is_configured": false, 00:10:07.103 "data_offset": 0, 00:10:07.103 "data_size": 65536 00:10:07.103 }, 00:10:07.104 { 00:10:07.104 "name": null, 00:10:07.104 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:07.104 "is_configured": false, 00:10:07.104 "data_offset": 0, 00:10:07.104 "data_size": 65536 00:10:07.104 }, 00:10:07.104 { 00:10:07.104 "name": "BaseBdev3", 00:10:07.104 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:07.104 "is_configured": true, 00:10:07.104 "data_offset": 0, 00:10:07.104 "data_size": 65536 00:10:07.104 } 00:10:07.104 ] 00:10:07.104 }' 00:10:07.104 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.104 09:22:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.673 [2024-11-20 09:22:32.934306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.673 "name": "Existed_Raid", 00:10:07.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.673 "strip_size_kb": 0, 00:10:07.673 "state": "configuring", 00:10:07.673 "raid_level": "raid1", 00:10:07.673 "superblock": false, 00:10:07.673 "num_base_bdevs": 3, 00:10:07.673 "num_base_bdevs_discovered": 2, 00:10:07.673 "num_base_bdevs_operational": 3, 00:10:07.673 "base_bdevs_list": [ 00:10:07.673 { 00:10:07.673 "name": null, 00:10:07.673 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:07.673 "is_configured": false, 00:10:07.673 "data_offset": 0, 00:10:07.673 "data_size": 65536 00:10:07.673 }, 00:10:07.673 { 00:10:07.673 "name": "BaseBdev2", 00:10:07.673 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:07.673 "is_configured": true, 00:10:07.673 "data_offset": 0, 00:10:07.673 "data_size": 65536 00:10:07.673 }, 00:10:07.673 { 
00:10:07.673 "name": "BaseBdev3", 00:10:07.673 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:07.673 "is_configured": true, 00:10:07.673 "data_offset": 0, 00:10:07.673 "data_size": 65536 00:10:07.673 } 00:10:07.673 ] 00:10:07.673 }' 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.673 09:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5d280489-f5af-4144-940b-49120991711b 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.242 09:22:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 [2024-11-20 09:22:33.545411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:08.242 [2024-11-20 09:22:33.545513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:08.242 [2024-11-20 09:22:33.545523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:08.242 [2024-11-20 09:22:33.545819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:08.242 [2024-11-20 09:22:33.546018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:08.242 [2024-11-20 09:22:33.546040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:08.242 [2024-11-20 09:22:33.546340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.242 NewBaseBdev 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.242 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.242 [ 00:10:08.242 { 00:10:08.242 "name": "NewBaseBdev", 00:10:08.242 "aliases": [ 00:10:08.242 "5d280489-f5af-4144-940b-49120991711b" 00:10:08.242 ], 00:10:08.242 "product_name": "Malloc disk", 00:10:08.242 "block_size": 512, 00:10:08.242 "num_blocks": 65536, 00:10:08.242 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:08.242 "assigned_rate_limits": { 00:10:08.242 "rw_ios_per_sec": 0, 00:10:08.242 "rw_mbytes_per_sec": 0, 00:10:08.242 "r_mbytes_per_sec": 0, 00:10:08.242 "w_mbytes_per_sec": 0 00:10:08.242 }, 00:10:08.242 "claimed": true, 00:10:08.242 "claim_type": "exclusive_write", 00:10:08.242 "zoned": false, 00:10:08.242 "supported_io_types": { 00:10:08.242 "read": true, 00:10:08.242 "write": true, 00:10:08.242 "unmap": true, 00:10:08.242 "flush": true, 00:10:08.242 "reset": true, 00:10:08.242 "nvme_admin": false, 00:10:08.242 "nvme_io": false, 00:10:08.243 "nvme_io_md": false, 00:10:08.243 "write_zeroes": true, 00:10:08.243 "zcopy": true, 00:10:08.243 "get_zone_info": false, 00:10:08.243 "zone_management": false, 00:10:08.243 "zone_append": false, 00:10:08.243 "compare": false, 00:10:08.243 "compare_and_write": false, 00:10:08.243 "abort": true, 00:10:08.243 "seek_hole": false, 00:10:08.243 "seek_data": false, 00:10:08.243 "copy": true, 00:10:08.243 "nvme_iov_md": false 00:10:08.243 }, 00:10:08.243 "memory_domains": [ 00:10:08.243 { 00:10:08.243 
"dma_device_id": "system", 00:10:08.243 "dma_device_type": 1 00:10:08.243 }, 00:10:08.243 { 00:10:08.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.243 "dma_device_type": 2 00:10:08.243 } 00:10:08.243 ], 00:10:08.243 "driver_specific": {} 00:10:08.243 } 00:10:08.243 ] 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.243 "name": "Existed_Raid", 00:10:08.243 "uuid": "64b55ab9-31d1-4077-b661-5becdeb28b48", 00:10:08.243 "strip_size_kb": 0, 00:10:08.243 "state": "online", 00:10:08.243 "raid_level": "raid1", 00:10:08.243 "superblock": false, 00:10:08.243 "num_base_bdevs": 3, 00:10:08.243 "num_base_bdevs_discovered": 3, 00:10:08.243 "num_base_bdevs_operational": 3, 00:10:08.243 "base_bdevs_list": [ 00:10:08.243 { 00:10:08.243 "name": "NewBaseBdev", 00:10:08.243 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:08.243 "is_configured": true, 00:10:08.243 "data_offset": 0, 00:10:08.243 "data_size": 65536 00:10:08.243 }, 00:10:08.243 { 00:10:08.243 "name": "BaseBdev2", 00:10:08.243 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:08.243 "is_configured": true, 00:10:08.243 "data_offset": 0, 00:10:08.243 "data_size": 65536 00:10:08.243 }, 00:10:08.243 { 00:10:08.243 "name": "BaseBdev3", 00:10:08.243 "uuid": "b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:08.243 "is_configured": true, 00:10:08.243 "data_offset": 0, 00:10:08.243 "data_size": 65536 00:10:08.243 } 00:10:08.243 ] 00:10:08.243 }' 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.243 09:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.813 09:22:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.813 [2024-11-20 09:22:34.025041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.813 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.813 "name": "Existed_Raid", 00:10:08.813 "aliases": [ 00:10:08.813 "64b55ab9-31d1-4077-b661-5becdeb28b48" 00:10:08.813 ], 00:10:08.813 "product_name": "Raid Volume", 00:10:08.813 "block_size": 512, 00:10:08.813 "num_blocks": 65536, 00:10:08.813 "uuid": "64b55ab9-31d1-4077-b661-5becdeb28b48", 00:10:08.813 "assigned_rate_limits": { 00:10:08.814 "rw_ios_per_sec": 0, 00:10:08.814 "rw_mbytes_per_sec": 0, 00:10:08.814 "r_mbytes_per_sec": 0, 00:10:08.814 "w_mbytes_per_sec": 0 00:10:08.814 }, 00:10:08.814 "claimed": false, 00:10:08.814 "zoned": false, 00:10:08.814 "supported_io_types": { 00:10:08.814 "read": true, 00:10:08.814 "write": true, 00:10:08.814 "unmap": false, 00:10:08.814 "flush": false, 00:10:08.814 "reset": true, 00:10:08.814 "nvme_admin": false, 00:10:08.814 "nvme_io": false, 00:10:08.814 "nvme_io_md": false, 00:10:08.814 "write_zeroes": true, 00:10:08.814 "zcopy": false, 00:10:08.814 
"get_zone_info": false, 00:10:08.814 "zone_management": false, 00:10:08.814 "zone_append": false, 00:10:08.814 "compare": false, 00:10:08.814 "compare_and_write": false, 00:10:08.814 "abort": false, 00:10:08.814 "seek_hole": false, 00:10:08.814 "seek_data": false, 00:10:08.814 "copy": false, 00:10:08.814 "nvme_iov_md": false 00:10:08.814 }, 00:10:08.814 "memory_domains": [ 00:10:08.814 { 00:10:08.814 "dma_device_id": "system", 00:10:08.814 "dma_device_type": 1 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.814 "dma_device_type": 2 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "dma_device_id": "system", 00:10:08.814 "dma_device_type": 1 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.814 "dma_device_type": 2 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "dma_device_id": "system", 00:10:08.814 "dma_device_type": 1 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.814 "dma_device_type": 2 00:10:08.814 } 00:10:08.814 ], 00:10:08.814 "driver_specific": { 00:10:08.814 "raid": { 00:10:08.814 "uuid": "64b55ab9-31d1-4077-b661-5becdeb28b48", 00:10:08.814 "strip_size_kb": 0, 00:10:08.814 "state": "online", 00:10:08.814 "raid_level": "raid1", 00:10:08.814 "superblock": false, 00:10:08.814 "num_base_bdevs": 3, 00:10:08.814 "num_base_bdevs_discovered": 3, 00:10:08.814 "num_base_bdevs_operational": 3, 00:10:08.814 "base_bdevs_list": [ 00:10:08.814 { 00:10:08.814 "name": "NewBaseBdev", 00:10:08.814 "uuid": "5d280489-f5af-4144-940b-49120991711b", 00:10:08.814 "is_configured": true, 00:10:08.814 "data_offset": 0, 00:10:08.814 "data_size": 65536 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "name": "BaseBdev2", 00:10:08.814 "uuid": "5ba49311-f20a-485a-9607-9dc10ec0b34e", 00:10:08.814 "is_configured": true, 00:10:08.814 "data_offset": 0, 00:10:08.814 "data_size": 65536 00:10:08.814 }, 00:10:08.814 { 00:10:08.814 "name": "BaseBdev3", 00:10:08.814 "uuid": 
"b69bf418-000c-408e-8cf4-bd70c9019c45", 00:10:08.814 "is_configured": true, 00:10:08.814 "data_offset": 0, 00:10:08.814 "data_size": 65536 00:10:08.814 } 00:10:08.814 ] 00:10:08.814 } 00:10:08.814 } 00:10:08.814 }' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:08.814 BaseBdev2 00:10:08.814 BaseBdev3' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.814 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.073 
[2024-11-20 09:22:34.324200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.073 [2024-11-20 09:22:34.324244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.073 [2024-11-20 09:22:34.324343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.073 [2024-11-20 09:22:34.324699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.073 [2024-11-20 09:22:34.324722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67700 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67700 ']' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67700 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67700 00:10:09.073 killing process with pid 67700 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67700' 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67700 00:10:09.073 [2024-11-20 
09:22:34.367020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.073 09:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67700 00:10:09.332 [2024-11-20 09:22:34.736788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.708 ************************************ 00:10:10.708 END TEST raid_state_function_test 00:10:10.708 ************************************ 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:10.708 00:10:10.708 real 0m11.468s 00:10:10.708 user 0m18.169s 00:10:10.708 sys 0m1.813s 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.708 09:22:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:10.708 09:22:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:10.708 09:22:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.708 09:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.708 ************************************ 00:10:10.708 START TEST raid_state_function_test_sb 00:10:10.708 ************************************ 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:10.708 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:10.709 09:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:10.709 
09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68331 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:10.709 Process raid pid: 68331 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68331' 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68331 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68331 ']' 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.709 09:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.967 [2024-11-20 09:22:36.236613] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:10:10.967 [2024-11-20 09:22:36.236841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.255 [2024-11-20 09:22:36.422204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.255 [2024-11-20 09:22:36.566741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.521 [2024-11-20 09:22:36.825402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.521 [2024-11-20 09:22:36.825487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.780 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.780 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:11.780 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.780 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.780 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.781 [2024-11-20 09:22:37.151721] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.781 [2024-11-20 09:22:37.151816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.781 [2024-11-20 09:22:37.151832] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.781 [2024-11-20 09:22:37.151847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.781 [2024-11-20 09:22:37.151857] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:11.781 [2024-11-20 09:22:37.151872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.781 "name": "Existed_Raid", 00:10:11.781 "uuid": "12da14f6-bcdf-466a-8fab-aae6d1f478ab", 00:10:11.781 "strip_size_kb": 0, 00:10:11.781 "state": "configuring", 00:10:11.781 "raid_level": "raid1", 00:10:11.781 "superblock": true, 00:10:11.781 "num_base_bdevs": 3, 00:10:11.781 "num_base_bdevs_discovered": 0, 00:10:11.781 "num_base_bdevs_operational": 3, 00:10:11.781 "base_bdevs_list": [ 00:10:11.781 { 00:10:11.781 "name": "BaseBdev1", 00:10:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.781 "is_configured": false, 00:10:11.781 "data_offset": 0, 00:10:11.781 "data_size": 0 00:10:11.781 }, 00:10:11.781 { 00:10:11.781 "name": "BaseBdev2", 00:10:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.781 "is_configured": false, 00:10:11.781 "data_offset": 0, 00:10:11.781 "data_size": 0 00:10:11.781 }, 00:10:11.781 { 00:10:11.781 "name": "BaseBdev3", 00:10:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.781 "is_configured": false, 00:10:11.781 "data_offset": 0, 00:10:11.781 "data_size": 0 00:10:11.781 } 00:10:11.781 ] 00:10:11.781 }' 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.781 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.348 [2024-11-20 09:22:37.587677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.348 [2024-11-20 09:22:37.587737] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.348 [2024-11-20 09:22:37.595706] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.348 [2024-11-20 09:22:37.595797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.348 [2024-11-20 09:22:37.595812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.348 [2024-11-20 09:22:37.595827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.348 [2024-11-20 09:22:37.595838] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.348 [2024-11-20 09:22:37.595852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.348 [2024-11-20 09:22:37.648109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.348 BaseBdev1 
00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.348 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.349 [ 00:10:12.349 { 00:10:12.349 "name": "BaseBdev1", 00:10:12.349 "aliases": [ 00:10:12.349 "3dd9c99d-9709-4533-9251-d9be2e2e5a6b" 00:10:12.349 ], 00:10:12.349 "product_name": "Malloc disk", 00:10:12.349 "block_size": 512, 00:10:12.349 "num_blocks": 65536, 00:10:12.349 "uuid": "3dd9c99d-9709-4533-9251-d9be2e2e5a6b", 00:10:12.349 "assigned_rate_limits": { 00:10:12.349 
"rw_ios_per_sec": 0, 00:10:12.349 "rw_mbytes_per_sec": 0, 00:10:12.349 "r_mbytes_per_sec": 0, 00:10:12.349 "w_mbytes_per_sec": 0 00:10:12.349 }, 00:10:12.349 "claimed": true, 00:10:12.349 "claim_type": "exclusive_write", 00:10:12.349 "zoned": false, 00:10:12.349 "supported_io_types": { 00:10:12.349 "read": true, 00:10:12.349 "write": true, 00:10:12.349 "unmap": true, 00:10:12.349 "flush": true, 00:10:12.349 "reset": true, 00:10:12.349 "nvme_admin": false, 00:10:12.349 "nvme_io": false, 00:10:12.349 "nvme_io_md": false, 00:10:12.349 "write_zeroes": true, 00:10:12.349 "zcopy": true, 00:10:12.349 "get_zone_info": false, 00:10:12.349 "zone_management": false, 00:10:12.349 "zone_append": false, 00:10:12.349 "compare": false, 00:10:12.349 "compare_and_write": false, 00:10:12.349 "abort": true, 00:10:12.349 "seek_hole": false, 00:10:12.349 "seek_data": false, 00:10:12.349 "copy": true, 00:10:12.349 "nvme_iov_md": false 00:10:12.349 }, 00:10:12.349 "memory_domains": [ 00:10:12.349 { 00:10:12.349 "dma_device_id": "system", 00:10:12.349 "dma_device_type": 1 00:10:12.349 }, 00:10:12.349 { 00:10:12.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.349 "dma_device_type": 2 00:10:12.349 } 00:10:12.349 ], 00:10:12.349 "driver_specific": {} 00:10:12.349 } 00:10:12.349 ] 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.349 "name": "Existed_Raid", 00:10:12.349 "uuid": "24f531ae-3238-4030-8653-b1a238b63dac", 00:10:12.349 "strip_size_kb": 0, 00:10:12.349 "state": "configuring", 00:10:12.349 "raid_level": "raid1", 00:10:12.349 "superblock": true, 00:10:12.349 "num_base_bdevs": 3, 00:10:12.349 "num_base_bdevs_discovered": 1, 00:10:12.349 "num_base_bdevs_operational": 3, 00:10:12.349 "base_bdevs_list": [ 00:10:12.349 { 00:10:12.349 "name": "BaseBdev1", 00:10:12.349 "uuid": "3dd9c99d-9709-4533-9251-d9be2e2e5a6b", 00:10:12.349 "is_configured": true, 00:10:12.349 "data_offset": 2048, 00:10:12.349 "data_size": 63488 
00:10:12.349 }, 00:10:12.349 { 00:10:12.349 "name": "BaseBdev2", 00:10:12.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.349 "is_configured": false, 00:10:12.349 "data_offset": 0, 00:10:12.349 "data_size": 0 00:10:12.349 }, 00:10:12.349 { 00:10:12.349 "name": "BaseBdev3", 00:10:12.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.349 "is_configured": false, 00:10:12.349 "data_offset": 0, 00:10:12.349 "data_size": 0 00:10:12.349 } 00:10:12.349 ] 00:10:12.349 }' 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.349 09:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.674 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.675 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.675 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.935 [2024-11-20 09:22:38.131979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.935 [2024-11-20 09:22:38.132064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.935 [2024-11-20 09:22:38.140114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.935 [2024-11-20 09:22:38.142424] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.935 [2024-11-20 09:22:38.142531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.935 [2024-11-20 09:22:38.142553] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.935 [2024-11-20 09:22:38.142576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.935 "name": "Existed_Raid", 00:10:12.935 "uuid": "af72899c-2525-40eb-be4f-d6f2394ec7a7", 00:10:12.935 "strip_size_kb": 0, 00:10:12.935 "state": "configuring", 00:10:12.935 "raid_level": "raid1", 00:10:12.935 "superblock": true, 00:10:12.935 "num_base_bdevs": 3, 00:10:12.935 "num_base_bdevs_discovered": 1, 00:10:12.935 "num_base_bdevs_operational": 3, 00:10:12.935 "base_bdevs_list": [ 00:10:12.935 { 00:10:12.935 "name": "BaseBdev1", 00:10:12.935 "uuid": "3dd9c99d-9709-4533-9251-d9be2e2e5a6b", 00:10:12.935 "is_configured": true, 00:10:12.935 "data_offset": 2048, 00:10:12.935 "data_size": 63488 00:10:12.935 }, 00:10:12.935 { 00:10:12.935 "name": "BaseBdev2", 00:10:12.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.935 "is_configured": false, 00:10:12.935 "data_offset": 0, 00:10:12.935 "data_size": 0 00:10:12.935 }, 00:10:12.935 { 00:10:12.935 "name": "BaseBdev3", 00:10:12.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.935 "is_configured": false, 00:10:12.935 "data_offset": 0, 00:10:12.935 "data_size": 0 00:10:12.935 } 00:10:12.935 ] 00:10:12.935 }' 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.935 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:13.194 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.194 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.194 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.195 [2024-11-20 09:22:38.630252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.195 BaseBdev2 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:13.195 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.195 [ 00:10:13.195 { 00:10:13.195 "name": "BaseBdev2", 00:10:13.195 "aliases": [ 00:10:13.195 "b1e26909-d9b8-4672-a4f8-3603a2573744" 00:10:13.195 ], 00:10:13.195 "product_name": "Malloc disk", 00:10:13.195 "block_size": 512, 00:10:13.195 "num_blocks": 65536, 00:10:13.195 "uuid": "b1e26909-d9b8-4672-a4f8-3603a2573744", 00:10:13.195 "assigned_rate_limits": { 00:10:13.195 "rw_ios_per_sec": 0, 00:10:13.195 "rw_mbytes_per_sec": 0, 00:10:13.195 "r_mbytes_per_sec": 0, 00:10:13.195 "w_mbytes_per_sec": 0 00:10:13.195 }, 00:10:13.195 "claimed": true, 00:10:13.195 "claim_type": "exclusive_write", 00:10:13.195 "zoned": false, 00:10:13.455 "supported_io_types": { 00:10:13.455 "read": true, 00:10:13.455 "write": true, 00:10:13.455 "unmap": true, 00:10:13.455 "flush": true, 00:10:13.455 "reset": true, 00:10:13.455 "nvme_admin": false, 00:10:13.455 "nvme_io": false, 00:10:13.455 "nvme_io_md": false, 00:10:13.455 "write_zeroes": true, 00:10:13.455 "zcopy": true, 00:10:13.455 "get_zone_info": false, 00:10:13.455 "zone_management": false, 00:10:13.455 "zone_append": false, 00:10:13.455 "compare": false, 00:10:13.455 "compare_and_write": false, 00:10:13.455 "abort": true, 00:10:13.455 "seek_hole": false, 00:10:13.455 "seek_data": false, 00:10:13.455 "copy": true, 00:10:13.455 "nvme_iov_md": false 00:10:13.455 }, 00:10:13.455 "memory_domains": [ 00:10:13.455 { 00:10:13.455 "dma_device_id": "system", 00:10:13.455 "dma_device_type": 1 00:10:13.455 }, 00:10:13.455 { 00:10:13.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.455 "dma_device_type": 2 00:10:13.455 } 00:10:13.455 ], 00:10:13.455 "driver_specific": {} 00:10:13.455 } 00:10:13.455 ] 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.455 
09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.455 "name": "Existed_Raid", 00:10:13.455 "uuid": "af72899c-2525-40eb-be4f-d6f2394ec7a7", 00:10:13.455 "strip_size_kb": 0, 00:10:13.455 "state": "configuring", 00:10:13.455 "raid_level": "raid1", 00:10:13.455 "superblock": true, 00:10:13.455 "num_base_bdevs": 3, 00:10:13.455 "num_base_bdevs_discovered": 2, 00:10:13.455 "num_base_bdevs_operational": 3, 00:10:13.455 "base_bdevs_list": [ 00:10:13.455 { 00:10:13.455 "name": "BaseBdev1", 00:10:13.455 "uuid": "3dd9c99d-9709-4533-9251-d9be2e2e5a6b", 00:10:13.455 "is_configured": true, 00:10:13.455 "data_offset": 2048, 00:10:13.455 "data_size": 63488 00:10:13.455 }, 00:10:13.455 { 00:10:13.455 "name": "BaseBdev2", 00:10:13.455 "uuid": "b1e26909-d9b8-4672-a4f8-3603a2573744", 00:10:13.455 "is_configured": true, 00:10:13.455 "data_offset": 2048, 00:10:13.455 "data_size": 63488 00:10:13.455 }, 00:10:13.455 { 00:10:13.455 "name": "BaseBdev3", 00:10:13.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.455 "is_configured": false, 00:10:13.455 "data_offset": 0, 00:10:13.455 "data_size": 0 00:10:13.455 } 00:10:13.455 ] 00:10:13.455 }' 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.455 09:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.736 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.993 [2024-11-20 09:22:39.199701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.993 [2024-11-20 09:22:39.200069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:13.993 [2024-11-20 09:22:39.200113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.993 BaseBdev3 00:10:13.993 [2024-11-20 09:22:39.200569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:13.993 [2024-11-20 09:22:39.200857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.993 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.993 [2024-11-20 09:22:39.200889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.993 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:13.993 [2024-11-20 09:22:39.201148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.993 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:13.993 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.994 09:22:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.994 [ 00:10:13.994 { 00:10:13.994 "name": "BaseBdev3", 00:10:13.994 "aliases": [ 00:10:13.994 "b0f7cb48-f69e-4877-b62e-14a160a56ec4" 00:10:13.994 ], 00:10:13.994 "product_name": "Malloc disk", 00:10:13.994 "block_size": 512, 00:10:13.994 "num_blocks": 65536, 00:10:13.994 "uuid": "b0f7cb48-f69e-4877-b62e-14a160a56ec4", 00:10:13.994 "assigned_rate_limits": { 00:10:13.994 "rw_ios_per_sec": 0, 00:10:13.994 "rw_mbytes_per_sec": 0, 00:10:13.994 "r_mbytes_per_sec": 0, 00:10:13.994 "w_mbytes_per_sec": 0 00:10:13.994 }, 00:10:13.994 "claimed": true, 00:10:13.994 "claim_type": "exclusive_write", 00:10:13.994 "zoned": false, 00:10:13.994 "supported_io_types": { 00:10:13.994 "read": true, 00:10:13.994 "write": true, 00:10:13.994 "unmap": true, 00:10:13.994 "flush": true, 00:10:13.994 "reset": true, 00:10:13.994 "nvme_admin": false, 00:10:13.994 "nvme_io": false, 00:10:13.994 "nvme_io_md": false, 00:10:13.994 "write_zeroes": true, 00:10:13.994 "zcopy": true, 00:10:13.994 "get_zone_info": false, 00:10:13.994 "zone_management": false, 00:10:13.994 "zone_append": false, 00:10:13.994 "compare": false, 00:10:13.994 "compare_and_write": false, 00:10:13.994 "abort": true, 00:10:13.994 "seek_hole": false, 00:10:13.994 "seek_data": false, 00:10:13.994 "copy": true, 00:10:13.994 "nvme_iov_md": false 00:10:13.994 }, 00:10:13.994 "memory_domains": [ 00:10:13.994 { 00:10:13.994 "dma_device_id": "system", 00:10:13.994 "dma_device_type": 1 00:10:13.994 }, 00:10:13.994 { 00:10:13.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.994 "dma_device_type": 2 00:10:13.994 } 00:10:13.994 ], 00:10:13.994 "driver_specific": {} 00:10:13.994 } 00:10:13.994 ] 
00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.994 
09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.994 "name": "Existed_Raid", 00:10:13.994 "uuid": "af72899c-2525-40eb-be4f-d6f2394ec7a7", 00:10:13.994 "strip_size_kb": 0, 00:10:13.994 "state": "online", 00:10:13.994 "raid_level": "raid1", 00:10:13.994 "superblock": true, 00:10:13.994 "num_base_bdevs": 3, 00:10:13.994 "num_base_bdevs_discovered": 3, 00:10:13.994 "num_base_bdevs_operational": 3, 00:10:13.994 "base_bdevs_list": [ 00:10:13.994 { 00:10:13.994 "name": "BaseBdev1", 00:10:13.994 "uuid": "3dd9c99d-9709-4533-9251-d9be2e2e5a6b", 00:10:13.994 "is_configured": true, 00:10:13.994 "data_offset": 2048, 00:10:13.994 "data_size": 63488 00:10:13.994 }, 00:10:13.994 { 00:10:13.994 "name": "BaseBdev2", 00:10:13.994 "uuid": "b1e26909-d9b8-4672-a4f8-3603a2573744", 00:10:13.994 "is_configured": true, 00:10:13.994 "data_offset": 2048, 00:10:13.994 "data_size": 63488 00:10:13.994 }, 00:10:13.994 { 00:10:13.994 "name": "BaseBdev3", 00:10:13.994 "uuid": "b0f7cb48-f69e-4877-b62e-14a160a56ec4", 00:10:13.994 "is_configured": true, 00:10:13.994 "data_offset": 2048, 00:10:13.994 "data_size": 63488 00:10:13.994 } 00:10:13.994 ] 00:10:13.994 }' 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.994 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.251 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.252 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.252 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:14.252 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.252 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.252 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.510 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.510 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.510 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.510 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.510 [2024-11-20 09:22:39.711897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.510 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.510 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.510 "name": "Existed_Raid", 00:10:14.510 "aliases": [ 00:10:14.510 "af72899c-2525-40eb-be4f-d6f2394ec7a7" 00:10:14.510 ], 00:10:14.510 "product_name": "Raid Volume", 00:10:14.510 "block_size": 512, 00:10:14.510 "num_blocks": 63488, 00:10:14.510 "uuid": "af72899c-2525-40eb-be4f-d6f2394ec7a7", 00:10:14.510 "assigned_rate_limits": { 00:10:14.510 "rw_ios_per_sec": 0, 00:10:14.510 "rw_mbytes_per_sec": 0, 00:10:14.510 "r_mbytes_per_sec": 0, 00:10:14.510 "w_mbytes_per_sec": 0 00:10:14.510 }, 00:10:14.510 "claimed": false, 00:10:14.510 "zoned": false, 00:10:14.510 "supported_io_types": { 00:10:14.510 "read": true, 00:10:14.510 "write": true, 00:10:14.510 "unmap": false, 00:10:14.510 "flush": false, 00:10:14.510 "reset": true, 00:10:14.510 "nvme_admin": false, 00:10:14.510 "nvme_io": false, 00:10:14.510 "nvme_io_md": false, 00:10:14.510 "write_zeroes": true, 
00:10:14.510 "zcopy": false, 00:10:14.510 "get_zone_info": false, 00:10:14.510 "zone_management": false, 00:10:14.510 "zone_append": false, 00:10:14.510 "compare": false, 00:10:14.510 "compare_and_write": false, 00:10:14.510 "abort": false, 00:10:14.510 "seek_hole": false, 00:10:14.510 "seek_data": false, 00:10:14.510 "copy": false, 00:10:14.510 "nvme_iov_md": false 00:10:14.510 }, 00:10:14.510 "memory_domains": [ 00:10:14.510 { 00:10:14.510 "dma_device_id": "system", 00:10:14.510 "dma_device_type": 1 00:10:14.510 }, 00:10:14.510 { 00:10:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.510 "dma_device_type": 2 00:10:14.510 }, 00:10:14.510 { 00:10:14.510 "dma_device_id": "system", 00:10:14.510 "dma_device_type": 1 00:10:14.510 }, 00:10:14.510 { 00:10:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.510 "dma_device_type": 2 00:10:14.510 }, 00:10:14.510 { 00:10:14.510 "dma_device_id": "system", 00:10:14.510 "dma_device_type": 1 00:10:14.510 }, 00:10:14.510 { 00:10:14.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.510 "dma_device_type": 2 00:10:14.510 } 00:10:14.510 ], 00:10:14.510 "driver_specific": { 00:10:14.510 "raid": { 00:10:14.510 "uuid": "af72899c-2525-40eb-be4f-d6f2394ec7a7", 00:10:14.510 "strip_size_kb": 0, 00:10:14.510 "state": "online", 00:10:14.510 "raid_level": "raid1", 00:10:14.510 "superblock": true, 00:10:14.510 "num_base_bdevs": 3, 00:10:14.510 "num_base_bdevs_discovered": 3, 00:10:14.510 "num_base_bdevs_operational": 3, 00:10:14.510 "base_bdevs_list": [ 00:10:14.510 { 00:10:14.510 "name": "BaseBdev1", 00:10:14.510 "uuid": "3dd9c99d-9709-4533-9251-d9be2e2e5a6b", 00:10:14.510 "is_configured": true, 00:10:14.510 "data_offset": 2048, 00:10:14.510 "data_size": 63488 00:10:14.510 }, 00:10:14.510 { 00:10:14.510 "name": "BaseBdev2", 00:10:14.510 "uuid": "b1e26909-d9b8-4672-a4f8-3603a2573744", 00:10:14.510 "is_configured": true, 00:10:14.510 "data_offset": 2048, 00:10:14.511 "data_size": 63488 00:10:14.511 }, 00:10:14.511 { 
00:10:14.511 "name": "BaseBdev3", 00:10:14.511 "uuid": "b0f7cb48-f69e-4877-b62e-14a160a56ec4", 00:10:14.511 "is_configured": true, 00:10:14.511 "data_offset": 2048, 00:10:14.511 "data_size": 63488 00:10:14.511 } 00:10:14.511 ] 00:10:14.511 } 00:10:14.511 } 00:10:14.511 }' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:14.511 BaseBdev2 00:10:14.511 BaseBdev3' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.511 09:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.511 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.845 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.845 09:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.845 09:22:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 09:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 [2024-11-20 09:22:39.975206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.845 
09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.845 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.846 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.846 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.846 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.846 "name": "Existed_Raid", 00:10:14.846 "uuid": "af72899c-2525-40eb-be4f-d6f2394ec7a7", 00:10:14.846 "strip_size_kb": 0, 00:10:14.846 "state": "online", 00:10:14.846 "raid_level": "raid1", 00:10:14.846 "superblock": true, 00:10:14.846 "num_base_bdevs": 3, 00:10:14.846 "num_base_bdevs_discovered": 2, 00:10:14.846 "num_base_bdevs_operational": 2, 00:10:14.846 "base_bdevs_list": [ 00:10:14.846 { 00:10:14.846 "name": null, 00:10:14.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.846 "is_configured": false, 00:10:14.846 "data_offset": 0, 00:10:14.846 "data_size": 63488 00:10:14.846 }, 00:10:14.846 { 00:10:14.846 "name": "BaseBdev2", 00:10:14.846 "uuid": "b1e26909-d9b8-4672-a4f8-3603a2573744", 00:10:14.846 "is_configured": true, 00:10:14.846 "data_offset": 2048, 00:10:14.846 "data_size": 63488 00:10:14.846 }, 00:10:14.846 { 00:10:14.846 "name": "BaseBdev3", 00:10:14.846 "uuid": "b0f7cb48-f69e-4877-b62e-14a160a56ec4", 00:10:14.846 "is_configured": true, 00:10:14.846 "data_offset": 2048, 00:10:14.846 "data_size": 63488 00:10:14.846 } 00:10:14.846 ] 00:10:14.846 }' 00:10:14.846 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.846 
09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.415 [2024-11-20 09:22:40.666997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.415 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.415 [2024-11-20 09:22:40.846497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.415 [2024-11-20 09:22:40.846644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.674 [2024-11-20 09:22:40.959846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.674 [2024-11-20 09:22:40.959927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.674 [2024-11-20 09:22:40.959945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.674 09:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.674 BaseBdev2 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.674 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.674 [ 00:10:15.674 { 00:10:15.674 "name": "BaseBdev2", 00:10:15.674 "aliases": [ 00:10:15.674 "138ee36a-db65-472a-a42f-174dd098667a" 00:10:15.674 ], 00:10:15.674 "product_name": "Malloc disk", 00:10:15.674 "block_size": 512, 00:10:15.674 "num_blocks": 65536, 00:10:15.674 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:15.674 "assigned_rate_limits": { 00:10:15.674 "rw_ios_per_sec": 0, 00:10:15.674 "rw_mbytes_per_sec": 0, 00:10:15.674 "r_mbytes_per_sec": 0, 00:10:15.674 "w_mbytes_per_sec": 0 00:10:15.674 }, 00:10:15.674 "claimed": false, 00:10:15.674 "zoned": false, 00:10:15.674 "supported_io_types": { 00:10:15.674 "read": true, 00:10:15.675 "write": true, 00:10:15.675 "unmap": true, 00:10:15.675 "flush": true, 00:10:15.675 "reset": true, 00:10:15.675 "nvme_admin": false, 00:10:15.675 "nvme_io": false, 00:10:15.675 
"nvme_io_md": false, 00:10:15.675 "write_zeroes": true, 00:10:15.675 "zcopy": true, 00:10:15.675 "get_zone_info": false, 00:10:15.675 "zone_management": false, 00:10:15.675 "zone_append": false, 00:10:15.675 "compare": false, 00:10:15.675 "compare_and_write": false, 00:10:15.675 "abort": true, 00:10:15.675 "seek_hole": false, 00:10:15.675 "seek_data": false, 00:10:15.675 "copy": true, 00:10:15.675 "nvme_iov_md": false 00:10:15.675 }, 00:10:15.675 "memory_domains": [ 00:10:15.675 { 00:10:15.675 "dma_device_id": "system", 00:10:15.675 "dma_device_type": 1 00:10:15.675 }, 00:10:15.675 { 00:10:15.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.675 "dma_device_type": 2 00:10:15.675 } 00:10:15.675 ], 00:10:15.675 "driver_specific": {} 00:10:15.675 } 00:10:15.675 ] 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.675 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.934 BaseBdev3 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.934 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.934 [ 00:10:15.934 { 00:10:15.934 "name": "BaseBdev3", 00:10:15.934 "aliases": [ 00:10:15.934 "d200d08e-a641-46b6-b6fa-590e1f0d09a6" 00:10:15.934 ], 00:10:15.934 "product_name": "Malloc disk", 00:10:15.934 "block_size": 512, 00:10:15.934 "num_blocks": 65536, 00:10:15.934 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:15.934 "assigned_rate_limits": { 00:10:15.934 "rw_ios_per_sec": 0, 00:10:15.934 "rw_mbytes_per_sec": 0, 00:10:15.934 "r_mbytes_per_sec": 0, 00:10:15.934 "w_mbytes_per_sec": 0 00:10:15.934 }, 00:10:15.934 "claimed": false, 00:10:15.934 "zoned": false, 00:10:15.934 "supported_io_types": { 00:10:15.935 "read": true, 00:10:15.935 "write": true, 00:10:15.935 "unmap": true, 00:10:15.935 "flush": true, 00:10:15.935 "reset": true, 00:10:15.935 "nvme_admin": false, 
00:10:15.935 "nvme_io": false, 00:10:15.935 "nvme_io_md": false, 00:10:15.935 "write_zeroes": true, 00:10:15.935 "zcopy": true, 00:10:15.935 "get_zone_info": false, 00:10:15.935 "zone_management": false, 00:10:15.935 "zone_append": false, 00:10:15.935 "compare": false, 00:10:15.935 "compare_and_write": false, 00:10:15.935 "abort": true, 00:10:15.935 "seek_hole": false, 00:10:15.935 "seek_data": false, 00:10:15.935 "copy": true, 00:10:15.935 "nvme_iov_md": false 00:10:15.935 }, 00:10:15.935 "memory_domains": [ 00:10:15.935 { 00:10:15.935 "dma_device_id": "system", 00:10:15.935 "dma_device_type": 1 00:10:15.935 }, 00:10:15.935 { 00:10:15.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.935 "dma_device_type": 2 00:10:15.935 } 00:10:15.935 ], 00:10:15.935 "driver_specific": {} 00:10:15.935 } 00:10:15.935 ] 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.935 [2024-11-20 09:22:41.183970] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.935 [2024-11-20 09:22:41.184041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.935 [2024-11-20 09:22:41.184071] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.935 [2024-11-20 09:22:41.186238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.935 
09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.935 "name": "Existed_Raid", 00:10:15.935 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:15.935 "strip_size_kb": 0, 00:10:15.935 "state": "configuring", 00:10:15.935 "raid_level": "raid1", 00:10:15.935 "superblock": true, 00:10:15.935 "num_base_bdevs": 3, 00:10:15.935 "num_base_bdevs_discovered": 2, 00:10:15.935 "num_base_bdevs_operational": 3, 00:10:15.935 "base_bdevs_list": [ 00:10:15.935 { 00:10:15.935 "name": "BaseBdev1", 00:10:15.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.935 "is_configured": false, 00:10:15.935 "data_offset": 0, 00:10:15.935 "data_size": 0 00:10:15.935 }, 00:10:15.935 { 00:10:15.935 "name": "BaseBdev2", 00:10:15.935 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:15.935 "is_configured": true, 00:10:15.935 "data_offset": 2048, 00:10:15.935 "data_size": 63488 00:10:15.935 }, 00:10:15.935 { 00:10:15.935 "name": "BaseBdev3", 00:10:15.935 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:15.935 "is_configured": true, 00:10:15.935 "data_offset": 2048, 00:10:15.935 "data_size": 63488 00:10:15.935 } 00:10:15.935 ] 00:10:15.935 }' 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.935 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.503 [2024-11-20 09:22:41.667167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.503 09:22:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.503 "name": 
"Existed_Raid", 00:10:16.503 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:16.503 "strip_size_kb": 0, 00:10:16.503 "state": "configuring", 00:10:16.503 "raid_level": "raid1", 00:10:16.503 "superblock": true, 00:10:16.503 "num_base_bdevs": 3, 00:10:16.503 "num_base_bdevs_discovered": 1, 00:10:16.503 "num_base_bdevs_operational": 3, 00:10:16.503 "base_bdevs_list": [ 00:10:16.503 { 00:10:16.503 "name": "BaseBdev1", 00:10:16.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.503 "is_configured": false, 00:10:16.503 "data_offset": 0, 00:10:16.503 "data_size": 0 00:10:16.503 }, 00:10:16.503 { 00:10:16.503 "name": null, 00:10:16.503 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:16.503 "is_configured": false, 00:10:16.503 "data_offset": 0, 00:10:16.503 "data_size": 63488 00:10:16.503 }, 00:10:16.503 { 00:10:16.503 "name": "BaseBdev3", 00:10:16.503 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:16.503 "is_configured": true, 00:10:16.503 "data_offset": 2048, 00:10:16.503 "data_size": 63488 00:10:16.503 } 00:10:16.503 ] 00:10:16.503 }' 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.503 09:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.763 
09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.763 [2024-11-20 09:22:42.208170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.763 BaseBdev1 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.763 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.023 [ 00:10:17.023 { 00:10:17.023 "name": "BaseBdev1", 00:10:17.023 "aliases": [ 00:10:17.023 "18eebb42-af8d-4980-be9c-b27cd1a467ca" 00:10:17.023 ], 00:10:17.023 "product_name": "Malloc disk", 00:10:17.023 "block_size": 512, 00:10:17.023 "num_blocks": 65536, 00:10:17.023 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:17.023 "assigned_rate_limits": { 00:10:17.023 "rw_ios_per_sec": 0, 00:10:17.023 "rw_mbytes_per_sec": 0, 00:10:17.023 "r_mbytes_per_sec": 0, 00:10:17.023 "w_mbytes_per_sec": 0 00:10:17.023 }, 00:10:17.023 "claimed": true, 00:10:17.023 "claim_type": "exclusive_write", 00:10:17.023 "zoned": false, 00:10:17.023 "supported_io_types": { 00:10:17.023 "read": true, 00:10:17.023 "write": true, 00:10:17.023 "unmap": true, 00:10:17.023 "flush": true, 00:10:17.023 "reset": true, 00:10:17.023 "nvme_admin": false, 00:10:17.023 "nvme_io": false, 00:10:17.023 "nvme_io_md": false, 00:10:17.023 "write_zeroes": true, 00:10:17.023 "zcopy": true, 00:10:17.023 "get_zone_info": false, 00:10:17.023 "zone_management": false, 00:10:17.023 "zone_append": false, 00:10:17.023 "compare": false, 00:10:17.023 "compare_and_write": false, 00:10:17.023 "abort": true, 00:10:17.023 "seek_hole": false, 00:10:17.023 "seek_data": false, 00:10:17.023 "copy": true, 00:10:17.023 "nvme_iov_md": false 00:10:17.023 }, 00:10:17.023 "memory_domains": [ 00:10:17.023 { 00:10:17.023 "dma_device_id": "system", 00:10:17.023 "dma_device_type": 1 00:10:17.023 }, 00:10:17.023 { 00:10:17.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.023 "dma_device_type": 2 00:10:17.023 } 00:10:17.023 ], 00:10:17.023 "driver_specific": {} 00:10:17.023 } 00:10:17.023 ] 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.023 
09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.023 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.024 "name": "Existed_Raid", 00:10:17.024 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:17.024 "strip_size_kb": 0, 
00:10:17.024 "state": "configuring", 00:10:17.024 "raid_level": "raid1", 00:10:17.024 "superblock": true, 00:10:17.024 "num_base_bdevs": 3, 00:10:17.024 "num_base_bdevs_discovered": 2, 00:10:17.024 "num_base_bdevs_operational": 3, 00:10:17.024 "base_bdevs_list": [ 00:10:17.024 { 00:10:17.024 "name": "BaseBdev1", 00:10:17.024 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:17.024 "is_configured": true, 00:10:17.024 "data_offset": 2048, 00:10:17.024 "data_size": 63488 00:10:17.024 }, 00:10:17.024 { 00:10:17.024 "name": null, 00:10:17.024 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:17.024 "is_configured": false, 00:10:17.024 "data_offset": 0, 00:10:17.024 "data_size": 63488 00:10:17.024 }, 00:10:17.024 { 00:10:17.024 "name": "BaseBdev3", 00:10:17.024 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:17.024 "is_configured": true, 00:10:17.024 "data_offset": 2048, 00:10:17.024 "data_size": 63488 00:10:17.024 } 00:10:17.024 ] 00:10:17.024 }' 00:10:17.024 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.024 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.284 [2024-11-20 09:22:42.731939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.284 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.543 "name": "Existed_Raid", 00:10:17.543 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:17.543 "strip_size_kb": 0, 00:10:17.543 "state": "configuring", 00:10:17.543 "raid_level": "raid1", 00:10:17.543 "superblock": true, 00:10:17.543 "num_base_bdevs": 3, 00:10:17.543 "num_base_bdevs_discovered": 1, 00:10:17.543 "num_base_bdevs_operational": 3, 00:10:17.543 "base_bdevs_list": [ 00:10:17.543 { 00:10:17.543 "name": "BaseBdev1", 00:10:17.543 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:17.543 "is_configured": true, 00:10:17.543 "data_offset": 2048, 00:10:17.543 "data_size": 63488 00:10:17.543 }, 00:10:17.543 { 00:10:17.543 "name": null, 00:10:17.543 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:17.543 "is_configured": false, 00:10:17.543 "data_offset": 0, 00:10:17.543 "data_size": 63488 00:10:17.543 }, 00:10:17.543 { 00:10:17.543 "name": null, 00:10:17.543 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:17.543 "is_configured": false, 00:10:17.543 "data_offset": 0, 00:10:17.543 "data_size": 63488 00:10:17.543 } 00:10:17.543 ] 00:10:17.543 }' 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.543 09:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.802 [2024-11-20 09:22:43.247367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:17.802 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.061 "name": "Existed_Raid", 00:10:18.061 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:18.061 "strip_size_kb": 0, 00:10:18.061 "state": "configuring", 00:10:18.061 "raid_level": "raid1", 00:10:18.061 "superblock": true, 00:10:18.061 "num_base_bdevs": 3, 00:10:18.061 "num_base_bdevs_discovered": 2, 00:10:18.061 "num_base_bdevs_operational": 3, 00:10:18.061 "base_bdevs_list": [ 00:10:18.061 { 00:10:18.061 "name": "BaseBdev1", 00:10:18.061 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:18.061 "is_configured": true, 00:10:18.061 "data_offset": 2048, 00:10:18.061 "data_size": 63488 00:10:18.061 }, 00:10:18.061 { 00:10:18.061 "name": null, 00:10:18.061 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:18.061 "is_configured": false, 00:10:18.061 "data_offset": 0, 00:10:18.061 "data_size": 63488 00:10:18.061 }, 00:10:18.061 { 00:10:18.061 "name": "BaseBdev3", 00:10:18.061 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:18.061 "is_configured": true, 00:10:18.061 "data_offset": 2048, 00:10:18.061 "data_size": 63488 00:10:18.061 } 00:10:18.061 ] 00:10:18.061 }' 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.061 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.321 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.321 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.321 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.321 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.321 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.581 [2024-11-20 09:22:43.802507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.581 "name": "Existed_Raid", 00:10:18.581 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:18.581 "strip_size_kb": 0, 00:10:18.581 "state": "configuring", 00:10:18.581 "raid_level": "raid1", 00:10:18.581 "superblock": true, 00:10:18.581 "num_base_bdevs": 3, 00:10:18.581 "num_base_bdevs_discovered": 1, 00:10:18.581 "num_base_bdevs_operational": 3, 00:10:18.581 "base_bdevs_list": [ 00:10:18.581 { 00:10:18.581 "name": null, 00:10:18.581 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:18.581 "is_configured": false, 00:10:18.581 "data_offset": 0, 00:10:18.581 "data_size": 63488 00:10:18.581 }, 00:10:18.581 { 00:10:18.581 "name": null, 00:10:18.581 "uuid": 
"138ee36a-db65-472a-a42f-174dd098667a", 00:10:18.581 "is_configured": false, 00:10:18.581 "data_offset": 0, 00:10:18.581 "data_size": 63488 00:10:18.581 }, 00:10:18.581 { 00:10:18.581 "name": "BaseBdev3", 00:10:18.581 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:18.581 "is_configured": true, 00:10:18.581 "data_offset": 2048, 00:10:18.581 "data_size": 63488 00:10:18.581 } 00:10:18.581 ] 00:10:18.581 }' 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.581 09:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.159 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.159 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.159 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.159 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.159 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 [2024-11-20 09:22:44.402508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.160 "name": "Existed_Raid", 00:10:19.160 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:19.160 "strip_size_kb": 0, 00:10:19.160 "state": "configuring", 00:10:19.160 
"raid_level": "raid1", 00:10:19.160 "superblock": true, 00:10:19.160 "num_base_bdevs": 3, 00:10:19.160 "num_base_bdevs_discovered": 2, 00:10:19.160 "num_base_bdevs_operational": 3, 00:10:19.160 "base_bdevs_list": [ 00:10:19.160 { 00:10:19.160 "name": null, 00:10:19.160 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:19.160 "is_configured": false, 00:10:19.160 "data_offset": 0, 00:10:19.160 "data_size": 63488 00:10:19.160 }, 00:10:19.160 { 00:10:19.160 "name": "BaseBdev2", 00:10:19.160 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:19.160 "is_configured": true, 00:10:19.160 "data_offset": 2048, 00:10:19.160 "data_size": 63488 00:10:19.160 }, 00:10:19.160 { 00:10:19.160 "name": "BaseBdev3", 00:10:19.160 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:19.160 "is_configured": true, 00:10:19.160 "data_offset": 2048, 00:10:19.160 "data_size": 63488 00:10:19.160 } 00:10:19.160 ] 00:10:19.160 }' 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.160 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.420 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.420 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:19.680 09:22:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18eebb42-af8d-4980-be9c-b27cd1a467ca 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 09:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 [2024-11-20 09:22:45.001181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:19.680 [2024-11-20 09:22:45.001515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.680 [2024-11-20 09:22:45.001534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.680 [2024-11-20 09:22:45.001847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:19.680 [2024-11-20 09:22:45.002054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.680 [2024-11-20 09:22:45.002072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:19.680 NewBaseBdev 00:10:19.680 [2024-11-20 09:22:45.002255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:19.680 
09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.680 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 [ 00:10:19.680 { 00:10:19.680 "name": "NewBaseBdev", 00:10:19.680 "aliases": [ 00:10:19.680 "18eebb42-af8d-4980-be9c-b27cd1a467ca" 00:10:19.680 ], 00:10:19.680 "product_name": "Malloc disk", 00:10:19.680 "block_size": 512, 00:10:19.680 "num_blocks": 65536, 00:10:19.680 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:19.680 "assigned_rate_limits": { 00:10:19.680 "rw_ios_per_sec": 0, 00:10:19.680 "rw_mbytes_per_sec": 0, 00:10:19.680 "r_mbytes_per_sec": 0, 00:10:19.680 "w_mbytes_per_sec": 0 00:10:19.680 }, 00:10:19.680 "claimed": true, 00:10:19.680 "claim_type": "exclusive_write", 00:10:19.680 
"zoned": false, 00:10:19.680 "supported_io_types": { 00:10:19.680 "read": true, 00:10:19.680 "write": true, 00:10:19.680 "unmap": true, 00:10:19.680 "flush": true, 00:10:19.680 "reset": true, 00:10:19.680 "nvme_admin": false, 00:10:19.680 "nvme_io": false, 00:10:19.680 "nvme_io_md": false, 00:10:19.680 "write_zeroes": true, 00:10:19.680 "zcopy": true, 00:10:19.680 "get_zone_info": false, 00:10:19.680 "zone_management": false, 00:10:19.680 "zone_append": false, 00:10:19.680 "compare": false, 00:10:19.680 "compare_and_write": false, 00:10:19.680 "abort": true, 00:10:19.680 "seek_hole": false, 00:10:19.680 "seek_data": false, 00:10:19.680 "copy": true, 00:10:19.680 "nvme_iov_md": false 00:10:19.680 }, 00:10:19.680 "memory_domains": [ 00:10:19.680 { 00:10:19.680 "dma_device_id": "system", 00:10:19.680 "dma_device_type": 1 00:10:19.680 }, 00:10:19.680 { 00:10:19.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.681 "dma_device_type": 2 00:10:19.681 } 00:10:19.681 ], 00:10:19.681 "driver_specific": {} 00:10:19.681 } 00:10:19.681 ] 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.681 "name": "Existed_Raid", 00:10:19.681 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:19.681 "strip_size_kb": 0, 00:10:19.681 "state": "online", 00:10:19.681 "raid_level": "raid1", 00:10:19.681 "superblock": true, 00:10:19.681 "num_base_bdevs": 3, 00:10:19.681 "num_base_bdevs_discovered": 3, 00:10:19.681 "num_base_bdevs_operational": 3, 00:10:19.681 "base_bdevs_list": [ 00:10:19.681 { 00:10:19.681 "name": "NewBaseBdev", 00:10:19.681 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:19.681 "is_configured": true, 00:10:19.681 "data_offset": 2048, 00:10:19.681 "data_size": 63488 00:10:19.681 }, 00:10:19.681 { 00:10:19.681 "name": "BaseBdev2", 00:10:19.681 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:19.681 "is_configured": true, 00:10:19.681 "data_offset": 2048, 00:10:19.681 "data_size": 63488 00:10:19.681 }, 00:10:19.681 
{ 00:10:19.681 "name": "BaseBdev3", 00:10:19.681 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:19.681 "is_configured": true, 00:10:19.681 "data_offset": 2048, 00:10:19.681 "data_size": 63488 00:10:19.681 } 00:10:19.681 ] 00:10:19.681 }' 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.681 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.250 [2024-11-20 09:22:45.496979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.250 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.250 "name": "Existed_Raid", 00:10:20.250 
"aliases": [ 00:10:20.250 "9988c2dd-8f66-40e9-9a59-3fa7094e6b62" 00:10:20.250 ], 00:10:20.250 "product_name": "Raid Volume", 00:10:20.250 "block_size": 512, 00:10:20.250 "num_blocks": 63488, 00:10:20.250 "uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:20.250 "assigned_rate_limits": { 00:10:20.250 "rw_ios_per_sec": 0, 00:10:20.250 "rw_mbytes_per_sec": 0, 00:10:20.250 "r_mbytes_per_sec": 0, 00:10:20.250 "w_mbytes_per_sec": 0 00:10:20.250 }, 00:10:20.250 "claimed": false, 00:10:20.250 "zoned": false, 00:10:20.250 "supported_io_types": { 00:10:20.250 "read": true, 00:10:20.250 "write": true, 00:10:20.250 "unmap": false, 00:10:20.250 "flush": false, 00:10:20.250 "reset": true, 00:10:20.250 "nvme_admin": false, 00:10:20.250 "nvme_io": false, 00:10:20.250 "nvme_io_md": false, 00:10:20.250 "write_zeroes": true, 00:10:20.250 "zcopy": false, 00:10:20.250 "get_zone_info": false, 00:10:20.250 "zone_management": false, 00:10:20.250 "zone_append": false, 00:10:20.250 "compare": false, 00:10:20.250 "compare_and_write": false, 00:10:20.250 "abort": false, 00:10:20.250 "seek_hole": false, 00:10:20.250 "seek_data": false, 00:10:20.250 "copy": false, 00:10:20.250 "nvme_iov_md": false 00:10:20.250 }, 00:10:20.250 "memory_domains": [ 00:10:20.250 { 00:10:20.250 "dma_device_id": "system", 00:10:20.250 "dma_device_type": 1 00:10:20.250 }, 00:10:20.250 { 00:10:20.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.250 "dma_device_type": 2 00:10:20.250 }, 00:10:20.250 { 00:10:20.250 "dma_device_id": "system", 00:10:20.250 "dma_device_type": 1 00:10:20.250 }, 00:10:20.250 { 00:10:20.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.250 "dma_device_type": 2 00:10:20.250 }, 00:10:20.250 { 00:10:20.250 "dma_device_id": "system", 00:10:20.250 "dma_device_type": 1 00:10:20.250 }, 00:10:20.250 { 00:10:20.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.250 "dma_device_type": 2 00:10:20.250 } 00:10:20.250 ], 00:10:20.250 "driver_specific": { 00:10:20.250 "raid": { 00:10:20.250 
"uuid": "9988c2dd-8f66-40e9-9a59-3fa7094e6b62", 00:10:20.250 "strip_size_kb": 0, 00:10:20.250 "state": "online", 00:10:20.250 "raid_level": "raid1", 00:10:20.250 "superblock": true, 00:10:20.250 "num_base_bdevs": 3, 00:10:20.250 "num_base_bdevs_discovered": 3, 00:10:20.250 "num_base_bdevs_operational": 3, 00:10:20.250 "base_bdevs_list": [ 00:10:20.250 { 00:10:20.250 "name": "NewBaseBdev", 00:10:20.250 "uuid": "18eebb42-af8d-4980-be9c-b27cd1a467ca", 00:10:20.250 "is_configured": true, 00:10:20.250 "data_offset": 2048, 00:10:20.251 "data_size": 63488 00:10:20.251 }, 00:10:20.251 { 00:10:20.251 "name": "BaseBdev2", 00:10:20.251 "uuid": "138ee36a-db65-472a-a42f-174dd098667a", 00:10:20.251 "is_configured": true, 00:10:20.251 "data_offset": 2048, 00:10:20.251 "data_size": 63488 00:10:20.251 }, 00:10:20.251 { 00:10:20.251 "name": "BaseBdev3", 00:10:20.251 "uuid": "d200d08e-a641-46b6-b6fa-590e1f0d09a6", 00:10:20.251 "is_configured": true, 00:10:20.251 "data_offset": 2048, 00:10:20.251 "data_size": 63488 00:10:20.251 } 00:10:20.251 ] 00:10:20.251 } 00:10:20.251 } 00:10:20.251 }' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:20.251 BaseBdev2 00:10:20.251 BaseBdev3' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:20.251 09:22:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.251 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.510 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.511 [2024-11-20 09:22:45.780090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.511 [2024-11-20 09:22:45.780147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.511 [2024-11-20 09:22:45.780273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.511 [2024-11-20 09:22:45.780686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.511 [2024-11-20 09:22:45.780723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68331 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68331 ']' 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68331 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68331 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.511 killing process with pid 68331 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68331' 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68331 00:10:20.511 [2024-11-20 09:22:45.821792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.511 09:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68331 00:10:20.770 [2024-11-20 09:22:46.153223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.167 09:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:22.167 00:10:22.167 real 0m11.239s 00:10:22.167 user 0m17.867s 00:10:22.167 sys 0m1.834s 00:10:22.167 09:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.167 09:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.167 ************************************ 00:10:22.167 END TEST raid_state_function_test_sb 00:10:22.167 ************************************ 00:10:22.167 09:22:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:22.167 09:22:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.167 09:22:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.167 09:22:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.167 ************************************ 00:10:22.167 START TEST raid_superblock_test 00:10:22.167 ************************************ 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68958 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68958 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68958 ']' 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.167 09:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.167 [2024-11-20 09:22:47.515716] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:10:22.167 [2024-11-20 09:22:47.515874] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68958 ] 00:10:22.425 [2024-11-20 09:22:47.695044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.425 [2024-11-20 09:22:47.824344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.684 [2024-11-20 09:22:48.048220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.684 [2024-11-20 09:22:48.048307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.943 
09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.943 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.202 malloc1 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.202 [2024-11-20 09:22:48.443661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.202 [2024-11-20 09:22:48.443752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.202 [2024-11-20 09:22:48.443780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:23.202 [2024-11-20 09:22:48.443791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.202 [2024-11-20 09:22:48.446125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.202 [2024-11-20 09:22:48.446164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.202 pt1 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.202 malloc2 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.202 [2024-11-20 09:22:48.504812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.202 [2024-11-20 09:22:48.504889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.202 [2024-11-20 09:22:48.504921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:23.202 [2024-11-20 09:22:48.504931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.202 [2024-11-20 09:22:48.507357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.202 [2024-11-20 09:22:48.507396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.202 
pt2 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.202 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.203 malloc3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.203 [2024-11-20 09:22:48.576890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:23.203 [2024-11-20 09:22:48.576959] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.203 [2024-11-20 09:22:48.576985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:23.203 [2024-11-20 09:22:48.576995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.203 [2024-11-20 09:22:48.579351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.203 [2024-11-20 09:22:48.579395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:23.203 pt3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.203 [2024-11-20 09:22:48.588922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:23.203 [2024-11-20 09:22:48.590952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.203 [2024-11-20 09:22:48.591025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:23.203 [2024-11-20 09:22:48.591190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:23.203 [2024-11-20 09:22:48.591232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.203 [2024-11-20 09:22:48.591535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:23.203 
[2024-11-20 09:22:48.591748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:23.203 [2024-11-20 09:22:48.591779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:23.203 [2024-11-20 09:22:48.591970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.203 "name": "raid_bdev1", 00:10:23.203 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:23.203 "strip_size_kb": 0, 00:10:23.203 "state": "online", 00:10:23.203 "raid_level": "raid1", 00:10:23.203 "superblock": true, 00:10:23.203 "num_base_bdevs": 3, 00:10:23.203 "num_base_bdevs_discovered": 3, 00:10:23.203 "num_base_bdevs_operational": 3, 00:10:23.203 "base_bdevs_list": [ 00:10:23.203 { 00:10:23.203 "name": "pt1", 00:10:23.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.203 "is_configured": true, 00:10:23.203 "data_offset": 2048, 00:10:23.203 "data_size": 63488 00:10:23.203 }, 00:10:23.203 { 00:10:23.203 "name": "pt2", 00:10:23.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.203 "is_configured": true, 00:10:23.203 "data_offset": 2048, 00:10:23.203 "data_size": 63488 00:10:23.203 }, 00:10:23.203 { 00:10:23.203 "name": "pt3", 00:10:23.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.203 "is_configured": true, 00:10:23.203 "data_offset": 2048, 00:10:23.203 "data_size": 63488 00:10:23.203 } 00:10:23.203 ] 00:10:23.203 }' 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.203 09:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.771 09:22:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.771 [2024-11-20 09:22:49.064464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.771 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.771 "name": "raid_bdev1", 00:10:23.771 "aliases": [ 00:10:23.771 "cedd8bcb-c601-4cb1-bb0e-e747f401be7e" 00:10:23.771 ], 00:10:23.771 "product_name": "Raid Volume", 00:10:23.771 "block_size": 512, 00:10:23.771 "num_blocks": 63488, 00:10:23.771 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:23.771 "assigned_rate_limits": { 00:10:23.771 "rw_ios_per_sec": 0, 00:10:23.771 "rw_mbytes_per_sec": 0, 00:10:23.771 "r_mbytes_per_sec": 0, 00:10:23.771 "w_mbytes_per_sec": 0 00:10:23.771 }, 00:10:23.771 "claimed": false, 00:10:23.771 "zoned": false, 00:10:23.771 "supported_io_types": { 00:10:23.771 "read": true, 00:10:23.771 "write": true, 00:10:23.771 "unmap": false, 00:10:23.771 "flush": false, 00:10:23.771 "reset": true, 00:10:23.771 "nvme_admin": false, 00:10:23.771 "nvme_io": false, 00:10:23.771 "nvme_io_md": false, 00:10:23.771 "write_zeroes": true, 00:10:23.771 "zcopy": false, 00:10:23.771 "get_zone_info": false, 00:10:23.771 "zone_management": false, 00:10:23.771 "zone_append": false, 00:10:23.771 "compare": false, 00:10:23.771 
"compare_and_write": false, 00:10:23.771 "abort": false, 00:10:23.771 "seek_hole": false, 00:10:23.771 "seek_data": false, 00:10:23.771 "copy": false, 00:10:23.771 "nvme_iov_md": false 00:10:23.771 }, 00:10:23.771 "memory_domains": [ 00:10:23.771 { 00:10:23.771 "dma_device_id": "system", 00:10:23.771 "dma_device_type": 1 00:10:23.771 }, 00:10:23.771 { 00:10:23.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.771 "dma_device_type": 2 00:10:23.771 }, 00:10:23.771 { 00:10:23.771 "dma_device_id": "system", 00:10:23.771 "dma_device_type": 1 00:10:23.771 }, 00:10:23.771 { 00:10:23.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.771 "dma_device_type": 2 00:10:23.771 }, 00:10:23.772 { 00:10:23.772 "dma_device_id": "system", 00:10:23.772 "dma_device_type": 1 00:10:23.772 }, 00:10:23.772 { 00:10:23.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.772 "dma_device_type": 2 00:10:23.772 } 00:10:23.772 ], 00:10:23.772 "driver_specific": { 00:10:23.772 "raid": { 00:10:23.772 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:23.772 "strip_size_kb": 0, 00:10:23.772 "state": "online", 00:10:23.772 "raid_level": "raid1", 00:10:23.772 "superblock": true, 00:10:23.772 "num_base_bdevs": 3, 00:10:23.772 "num_base_bdevs_discovered": 3, 00:10:23.772 "num_base_bdevs_operational": 3, 00:10:23.772 "base_bdevs_list": [ 00:10:23.772 { 00:10:23.772 "name": "pt1", 00:10:23.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.772 "is_configured": true, 00:10:23.772 "data_offset": 2048, 00:10:23.772 "data_size": 63488 00:10:23.772 }, 00:10:23.772 { 00:10:23.772 "name": "pt2", 00:10:23.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.772 "is_configured": true, 00:10:23.772 "data_offset": 2048, 00:10:23.772 "data_size": 63488 00:10:23.772 }, 00:10:23.772 { 00:10:23.772 "name": "pt3", 00:10:23.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.772 "is_configured": true, 00:10:23.772 "data_offset": 2048, 00:10:23.772 "data_size": 63488 00:10:23.772 } 
00:10:23.772 ] 00:10:23.772 } 00:10:23.772 } 00:10:23.772 }' 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.772 pt2 00:10:23.772 pt3' 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.772 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 [2024-11-20 09:22:49.351988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cedd8bcb-c601-4cb1-bb0e-e747f401be7e 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cedd8bcb-c601-4cb1-bb0e-e747f401be7e ']' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 [2024-11-20 09:22:49.399604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.031 [2024-11-20 09:22:49.399649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.031 [2024-11-20 09:22:49.399764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.031 [2024-11-20 09:22:49.399856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.031 [2024-11-20 09:22:49.399869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.031 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.032 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 [2024-11-20 09:22:49.555473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:24.292 [2024-11-20 09:22:49.557675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:24.292 [2024-11-20 09:22:49.557747] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:24.292 [2024-11-20 09:22:49.557807] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:24.292 [2024-11-20 09:22:49.557873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:24.292 [2024-11-20 09:22:49.557896] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:24.292 [2024-11-20 09:22:49.557916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.292 [2024-11-20 09:22:49.557927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:24.292 request: 00:10:24.292 { 00:10:24.292 "name": "raid_bdev1", 00:10:24.292 "raid_level": "raid1", 00:10:24.292 "base_bdevs": [ 00:10:24.292 "malloc1", 00:10:24.292 "malloc2", 00:10:24.292 "malloc3" 00:10:24.292 ], 00:10:24.292 "superblock": false, 00:10:24.292 "method": "bdev_raid_create", 00:10:24.292 "req_id": 1 00:10:24.292 } 00:10:24.292 Got JSON-RPC error response 00:10:24.292 response: 00:10:24.292 { 00:10:24.292 "code": -17, 00:10:24.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:24.292 } 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 [2024-11-20 09:22:49.615284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.292 [2024-11-20 09:22:49.615378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.292 [2024-11-20 09:22:49.615410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.292 [2024-11-20 09:22:49.615421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.292 [2024-11-20 09:22:49.618139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.292 [2024-11-20 09:22:49.618201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.292 [2024-11-20 09:22:49.618314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:24.292 [2024-11-20 09:22:49.618376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.292 pt1 00:10:24.292 
09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.292 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.292 "name": "raid_bdev1", 00:10:24.292 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:24.292 "strip_size_kb": 0, 00:10:24.292 
"state": "configuring", 00:10:24.292 "raid_level": "raid1", 00:10:24.292 "superblock": true, 00:10:24.292 "num_base_bdevs": 3, 00:10:24.292 "num_base_bdevs_discovered": 1, 00:10:24.292 "num_base_bdevs_operational": 3, 00:10:24.292 "base_bdevs_list": [ 00:10:24.292 { 00:10:24.293 "name": "pt1", 00:10:24.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.293 "is_configured": true, 00:10:24.293 "data_offset": 2048, 00:10:24.293 "data_size": 63488 00:10:24.293 }, 00:10:24.293 { 00:10:24.293 "name": null, 00:10:24.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.293 "is_configured": false, 00:10:24.293 "data_offset": 2048, 00:10:24.293 "data_size": 63488 00:10:24.293 }, 00:10:24.293 { 00:10:24.293 "name": null, 00:10:24.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.293 "is_configured": false, 00:10:24.293 "data_offset": 2048, 00:10:24.293 "data_size": 63488 00:10:24.293 } 00:10:24.293 ] 00:10:24.293 }' 00:10:24.293 09:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.293 09:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.862 [2024-11-20 09:22:50.086576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.862 [2024-11-20 09:22:50.086670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.862 [2024-11-20 09:22:50.086697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:24.862 
[2024-11-20 09:22:50.086708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.862 [2024-11-20 09:22:50.087238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.862 [2024-11-20 09:22:50.087269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.862 [2024-11-20 09:22:50.087375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.862 [2024-11-20 09:22:50.087406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.862 pt2 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.862 [2024-11-20 09:22:50.094596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.862 "name": "raid_bdev1", 00:10:24.862 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:24.862 "strip_size_kb": 0, 00:10:24.862 "state": "configuring", 00:10:24.862 "raid_level": "raid1", 00:10:24.862 "superblock": true, 00:10:24.862 "num_base_bdevs": 3, 00:10:24.862 "num_base_bdevs_discovered": 1, 00:10:24.862 "num_base_bdevs_operational": 3, 00:10:24.862 "base_bdevs_list": [ 00:10:24.862 { 00:10:24.862 "name": "pt1", 00:10:24.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.862 "is_configured": true, 00:10:24.862 "data_offset": 2048, 00:10:24.862 "data_size": 63488 00:10:24.862 }, 00:10:24.862 { 00:10:24.862 "name": null, 00:10:24.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.862 "is_configured": false, 00:10:24.862 "data_offset": 0, 00:10:24.862 "data_size": 63488 00:10:24.862 }, 00:10:24.862 { 00:10:24.862 "name": null, 00:10:24.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.862 "is_configured": false, 00:10:24.862 
"data_offset": 2048, 00:10:24.862 "data_size": 63488 00:10:24.862 } 00:10:24.862 ] 00:10:24.862 }' 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.862 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.122 [2024-11-20 09:22:50.545761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.122 [2024-11-20 09:22:50.545852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.122 [2024-11-20 09:22:50.545876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:25.122 [2024-11-20 09:22:50.545889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.122 [2024-11-20 09:22:50.546421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.122 [2024-11-20 09:22:50.546493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.122 [2024-11-20 09:22:50.546593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.122 [2024-11-20 09:22:50.546647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.122 pt2 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.122 09:22:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.122 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.123 [2024-11-20 09:22:50.557756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.123 [2024-11-20 09:22:50.557838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.123 [2024-11-20 09:22:50.557866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.123 [2024-11-20 09:22:50.557882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.123 [2024-11-20 09:22:50.558392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.123 [2024-11-20 09:22:50.558450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.123 [2024-11-20 09:22:50.558550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:25.123 [2024-11-20 09:22:50.558582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.123 [2024-11-20 09:22:50.558744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.123 [2024-11-20 09:22:50.558768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.123 [2024-11-20 09:22:50.559042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:25.123 [2024-11-20 09:22:50.559237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:25.123 [2024-11-20 09:22:50.559257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:25.123 [2024-11-20 09:22:50.559427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.123 pt3 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:25.123 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.382 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.382 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.382 "name": "raid_bdev1", 00:10:25.382 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:25.382 "strip_size_kb": 0, 00:10:25.382 "state": "online", 00:10:25.382 "raid_level": "raid1", 00:10:25.382 "superblock": true, 00:10:25.382 "num_base_bdevs": 3, 00:10:25.382 "num_base_bdevs_discovered": 3, 00:10:25.382 "num_base_bdevs_operational": 3, 00:10:25.382 "base_bdevs_list": [ 00:10:25.382 { 00:10:25.382 "name": "pt1", 00:10:25.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.382 "is_configured": true, 00:10:25.382 "data_offset": 2048, 00:10:25.382 "data_size": 63488 00:10:25.382 }, 00:10:25.382 { 00:10:25.382 "name": "pt2", 00:10:25.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.382 "is_configured": true, 00:10:25.382 "data_offset": 2048, 00:10:25.382 "data_size": 63488 00:10:25.382 }, 00:10:25.382 { 00:10:25.382 "name": "pt3", 00:10:25.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.382 "is_configured": true, 00:10:25.382 "data_offset": 2048, 00:10:25.382 "data_size": 63488 00:10:25.382 } 00:10:25.382 ] 00:10:25.382 }' 00:10:25.382 09:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.382 09:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.641 [2024-11-20 09:22:51.061313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.641 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.900 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.900 "name": "raid_bdev1", 00:10:25.900 "aliases": [ 00:10:25.900 "cedd8bcb-c601-4cb1-bb0e-e747f401be7e" 00:10:25.900 ], 00:10:25.900 "product_name": "Raid Volume", 00:10:25.900 "block_size": 512, 00:10:25.900 "num_blocks": 63488, 00:10:25.900 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:25.900 "assigned_rate_limits": { 00:10:25.900 "rw_ios_per_sec": 0, 00:10:25.900 "rw_mbytes_per_sec": 0, 00:10:25.900 "r_mbytes_per_sec": 0, 00:10:25.900 "w_mbytes_per_sec": 0 00:10:25.900 }, 00:10:25.900 "claimed": false, 00:10:25.900 "zoned": false, 00:10:25.900 "supported_io_types": { 00:10:25.900 "read": true, 00:10:25.900 "write": true, 00:10:25.900 "unmap": false, 00:10:25.900 "flush": false, 00:10:25.900 "reset": true, 00:10:25.900 "nvme_admin": false, 00:10:25.900 "nvme_io": false, 00:10:25.900 "nvme_io_md": false, 00:10:25.900 "write_zeroes": true, 00:10:25.900 "zcopy": false, 00:10:25.900 "get_zone_info": false, 
00:10:25.900 "zone_management": false, 00:10:25.900 "zone_append": false, 00:10:25.900 "compare": false, 00:10:25.900 "compare_and_write": false, 00:10:25.900 "abort": false, 00:10:25.900 "seek_hole": false, 00:10:25.900 "seek_data": false, 00:10:25.900 "copy": false, 00:10:25.900 "nvme_iov_md": false 00:10:25.900 }, 00:10:25.900 "memory_domains": [ 00:10:25.900 { 00:10:25.900 "dma_device_id": "system", 00:10:25.900 "dma_device_type": 1 00:10:25.900 }, 00:10:25.900 { 00:10:25.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.900 "dma_device_type": 2 00:10:25.900 }, 00:10:25.900 { 00:10:25.900 "dma_device_id": "system", 00:10:25.900 "dma_device_type": 1 00:10:25.900 }, 00:10:25.900 { 00:10:25.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.900 "dma_device_type": 2 00:10:25.900 }, 00:10:25.900 { 00:10:25.901 "dma_device_id": "system", 00:10:25.901 "dma_device_type": 1 00:10:25.901 }, 00:10:25.901 { 00:10:25.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.901 "dma_device_type": 2 00:10:25.901 } 00:10:25.901 ], 00:10:25.901 "driver_specific": { 00:10:25.901 "raid": { 00:10:25.901 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:25.901 "strip_size_kb": 0, 00:10:25.901 "state": "online", 00:10:25.901 "raid_level": "raid1", 00:10:25.901 "superblock": true, 00:10:25.901 "num_base_bdevs": 3, 00:10:25.901 "num_base_bdevs_discovered": 3, 00:10:25.901 "num_base_bdevs_operational": 3, 00:10:25.901 "base_bdevs_list": [ 00:10:25.901 { 00:10:25.901 "name": "pt1", 00:10:25.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.901 "is_configured": true, 00:10:25.901 "data_offset": 2048, 00:10:25.901 "data_size": 63488 00:10:25.901 }, 00:10:25.901 { 00:10:25.901 "name": "pt2", 00:10:25.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.901 "is_configured": true, 00:10:25.901 "data_offset": 2048, 00:10:25.901 "data_size": 63488 00:10:25.901 }, 00:10:25.901 { 00:10:25.901 "name": "pt3", 00:10:25.901 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:25.901 "is_configured": true, 00:10:25.901 "data_offset": 2048, 00:10:25.901 "data_size": 63488 00:10:25.901 } 00:10:25.901 ] 00:10:25.901 } 00:10:25.901 } 00:10:25.901 }' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.901 pt2 00:10:25.901 pt3' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.901 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.160 [2024-11-20 09:22:51.364809] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cedd8bcb-c601-4cb1-bb0e-e747f401be7e '!=' cedd8bcb-c601-4cb1-bb0e-e747f401be7e ']' 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.160 [2024-11-20 09:22:51.412490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.160 09:22:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.160 "name": "raid_bdev1", 00:10:26.160 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:26.160 "strip_size_kb": 0, 00:10:26.160 "state": "online", 00:10:26.160 "raid_level": "raid1", 00:10:26.160 "superblock": true, 00:10:26.160 "num_base_bdevs": 3, 00:10:26.160 "num_base_bdevs_discovered": 2, 00:10:26.160 "num_base_bdevs_operational": 2, 00:10:26.160 "base_bdevs_list": [ 00:10:26.160 { 00:10:26.160 "name": null, 00:10:26.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.160 "is_configured": false, 00:10:26.160 "data_offset": 0, 00:10:26.160 "data_size": 63488 00:10:26.160 }, 00:10:26.160 { 00:10:26.160 "name": "pt2", 00:10:26.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.160 "is_configured": true, 00:10:26.160 "data_offset": 2048, 00:10:26.160 "data_size": 63488 00:10:26.160 }, 00:10:26.160 { 00:10:26.160 "name": "pt3", 00:10:26.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.160 "is_configured": true, 00:10:26.160 "data_offset": 2048, 00:10:26.160 "data_size": 63488 00:10:26.160 } 
00:10:26.160 ] 00:10:26.160 }' 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.160 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 [2024-11-20 09:22:51.883900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.728 [2024-11-20 09:22:51.883940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.728 [2024-11-20 09:22:51.884051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.728 [2024-11-20 09:22:51.884140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.728 [2024-11-20 09:22:51.884183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.728 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.728 09:22:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 [2024-11-20 09:22:51.971908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.729 [2024-11-20 09:22:51.972004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.729 [2024-11-20 09:22:51.972034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:26.729 [2024-11-20 09:22:51.972060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.729 [2024-11-20 09:22:51.974723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.729 [2024-11-20 09:22:51.974782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.729 [2024-11-20 09:22:51.974934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.729 [2024-11-20 09:22:51.975019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.729 pt2 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.729 09:22:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.729 09:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.729 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.729 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.729 "name": "raid_bdev1", 00:10:26.729 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:26.729 "strip_size_kb": 0, 00:10:26.729 "state": "configuring", 00:10:26.729 "raid_level": "raid1", 00:10:26.729 "superblock": true, 00:10:26.729 "num_base_bdevs": 3, 00:10:26.729 "num_base_bdevs_discovered": 1, 00:10:26.729 "num_base_bdevs_operational": 2, 00:10:26.729 "base_bdevs_list": [ 00:10:26.729 { 00:10:26.729 "name": null, 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.729 "is_configured": false, 00:10:26.729 "data_offset": 2048, 00:10:26.729 "data_size": 63488 00:10:26.729 }, 00:10:26.729 { 00:10:26.729 "name": "pt2", 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.729 "is_configured": true, 00:10:26.729 "data_offset": 2048, 00:10:26.729 "data_size": 63488 00:10:26.729 }, 00:10:26.729 { 00:10:26.729 "name": null, 00:10:26.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.729 "is_configured": false, 00:10:26.729 "data_offset": 2048, 00:10:26.729 "data_size": 63488 00:10:26.729 } 
00:10:26.729 ] 00:10:26.729 }' 00:10:26.729 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.729 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.988 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:26.988 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:26.988 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:26.988 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.988 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.988 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.988 [2024-11-20 09:22:52.439298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.988 [2024-11-20 09:22:52.439397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.988 [2024-11-20 09:22:52.439444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:26.988 [2024-11-20 09:22:52.439472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.988 [2024-11-20 09:22:52.440049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.988 [2024-11-20 09:22:52.440105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.988 [2024-11-20 09:22:52.440251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.988 [2024-11-20 09:22:52.440305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.988 [2024-11-20 09:22:52.440499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:26.988 [2024-11-20 09:22:52.440527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.988 [2024-11-20 09:22:52.440867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:26.988 [2024-11-20 09:22:52.441089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.988 [2024-11-20 09:22:52.441109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:26.988 [2024-11-20 09:22:52.441304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.249 pt3 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.249 "name": "raid_bdev1", 00:10:27.249 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:27.249 "strip_size_kb": 0, 00:10:27.249 "state": "online", 00:10:27.249 "raid_level": "raid1", 00:10:27.249 "superblock": true, 00:10:27.249 "num_base_bdevs": 3, 00:10:27.249 "num_base_bdevs_discovered": 2, 00:10:27.249 "num_base_bdevs_operational": 2, 00:10:27.249 "base_bdevs_list": [ 00:10:27.249 { 00:10:27.249 "name": null, 00:10:27.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.249 "is_configured": false, 00:10:27.249 "data_offset": 2048, 00:10:27.249 "data_size": 63488 00:10:27.249 }, 00:10:27.249 { 00:10:27.249 "name": "pt2", 00:10:27.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.249 "is_configured": true, 00:10:27.249 "data_offset": 2048, 00:10:27.249 "data_size": 63488 00:10:27.249 }, 00:10:27.249 { 00:10:27.249 "name": "pt3", 00:10:27.249 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.249 "is_configured": true, 00:10:27.249 "data_offset": 2048, 00:10:27.249 "data_size": 63488 00:10:27.249 } 00:10:27.249 ] 00:10:27.249 }' 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.249 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.508 [2024-11-20 09:22:52.926501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.508 [2024-11-20 09:22:52.926549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.508 [2024-11-20 09:22:52.926661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.508 [2024-11-20 09:22:52.926755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.508 [2024-11-20 09:22:52.926774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:27.508 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.769 09:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.770 [2024-11-20 09:22:52.998443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:27.770 [2024-11-20 09:22:52.998549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.770 [2024-11-20 09:22:52.998588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:27.770 [2024-11-20 09:22:52.998605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.770 [2024-11-20 09:22:53.001214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.770 [2024-11-20 09:22:53.001276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:27.770 [2024-11-20 09:22:53.001414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:27.770 [2024-11-20 09:22:53.001505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:27.770 [2024-11-20 09:22:53.001718] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:27.770 [2024-11-20 09:22:53.001742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.770 [2024-11-20 09:22:53.001771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:27.770 [2024-11-20 09:22:53.001859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.770 pt1 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.770 "name": "raid_bdev1", 00:10:27.770 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:27.770 "strip_size_kb": 0, 00:10:27.770 "state": "configuring", 00:10:27.770 "raid_level": "raid1", 00:10:27.770 "superblock": true, 00:10:27.770 "num_base_bdevs": 3, 00:10:27.770 "num_base_bdevs_discovered": 1, 00:10:27.770 "num_base_bdevs_operational": 2, 00:10:27.770 "base_bdevs_list": [ 00:10:27.770 { 00:10:27.770 "name": null, 00:10:27.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.770 "is_configured": false, 00:10:27.770 "data_offset": 2048, 00:10:27.770 "data_size": 63488 00:10:27.770 }, 00:10:27.770 { 00:10:27.770 "name": "pt2", 00:10:27.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.770 "is_configured": true, 00:10:27.770 "data_offset": 2048, 00:10:27.770 "data_size": 63488 00:10:27.770 }, 00:10:27.770 { 00:10:27.770 "name": null, 00:10:27.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.770 "is_configured": false, 00:10:27.770 "data_offset": 2048, 00:10:27.770 "data_size": 63488 00:10:27.770 } 00:10:27.770 ] 00:10:27.770 }' 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.770 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.053 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:28.053 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.053 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.053 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:28.053 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.313 [2024-11-20 09:22:53.529568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.313 [2024-11-20 09:22:53.529654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.313 [2024-11-20 09:22:53.529690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:28.313 [2024-11-20 09:22:53.529716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.313 [2024-11-20 09:22:53.530283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.313 [2024-11-20 09:22:53.530319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:28.313 [2024-11-20 09:22:53.530455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:28.313 [2024-11-20 09:22:53.530555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.313 [2024-11-20 09:22:53.530760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:28.313 [2024-11-20 09:22:53.530781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.313 [2024-11-20 09:22:53.531114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:28.313 [2024-11-20 09:22:53.531336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:28.313 [2024-11-20 09:22:53.531363] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:28.313 [2024-11-20 09:22:53.531589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.313 pt3 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:28.313 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.313 "name": "raid_bdev1", 00:10:28.313 "uuid": "cedd8bcb-c601-4cb1-bb0e-e747f401be7e", 00:10:28.313 "strip_size_kb": 0, 00:10:28.313 "state": "online", 00:10:28.313 "raid_level": "raid1", 00:10:28.313 "superblock": true, 00:10:28.313 "num_base_bdevs": 3, 00:10:28.313 "num_base_bdevs_discovered": 2, 00:10:28.313 "num_base_bdevs_operational": 2, 00:10:28.314 "base_bdevs_list": [ 00:10:28.314 { 00:10:28.314 "name": null, 00:10:28.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.314 "is_configured": false, 00:10:28.314 "data_offset": 2048, 00:10:28.314 "data_size": 63488 00:10:28.314 }, 00:10:28.314 { 00:10:28.314 "name": "pt2", 00:10:28.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.314 "is_configured": true, 00:10:28.314 "data_offset": 2048, 00:10:28.314 "data_size": 63488 00:10:28.314 }, 00:10:28.314 { 00:10:28.314 "name": "pt3", 00:10:28.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.314 "is_configured": true, 00:10:28.314 "data_offset": 2048, 00:10:28.314 "data_size": 63488 00:10:28.314 } 00:10:28.314 ] 00:10:28.314 }' 00:10:28.314 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.314 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.573 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:28.573 09:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:28.573 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.573 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.573 09:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.573 09:22:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.832 [2024-11-20 09:22:54.032984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cedd8bcb-c601-4cb1-bb0e-e747f401be7e '!=' cedd8bcb-c601-4cb1-bb0e-e747f401be7e ']' 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68958 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68958 ']' 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68958 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68958 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68958' 00:10:28.832 killing process with pid 68958 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68958 00:10:28.832 [2024-11-20 09:22:54.116162] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.832 09:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68958 00:10:28.832 [2024-11-20 09:22:54.116312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.832 [2024-11-20 09:22:54.116455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.832 [2024-11-20 09:22:54.116483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:29.091 [2024-11-20 09:22:54.469151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.472 09:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:30.472 00:10:30.472 real 0m8.267s 00:10:30.472 user 0m12.970s 00:10:30.472 sys 0m1.372s 00:10:30.472 09:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.472 09:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.472 ************************************ 00:10:30.472 END TEST raid_superblock_test 00:10:30.472 ************************************ 00:10:30.472 09:22:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:30.472 09:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.472 09:22:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.472 09:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.472 ************************************ 00:10:30.472 START TEST raid_read_error_test 00:10:30.472 ************************************ 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:30.472 09:22:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:30.472 09:22:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rShyfMWcHZ 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69404 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69404 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69404 ']' 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.472 09:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.472 [2024-11-20 09:22:55.869390] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:10:30.472 [2024-11-20 09:22:55.869600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69404 ] 00:10:30.731 [2024-11-20 09:22:56.067090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.992 [2024-11-20 09:22:56.201003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.992 [2024-11-20 09:22:56.425153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.992 [2024-11-20 09:22:56.425215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 BaseBdev1_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 true 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 [2024-11-20 09:22:56.816571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.567 [2024-11-20 09:22:56.816637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.567 [2024-11-20 09:22:56.816658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:31.567 [2024-11-20 09:22:56.816669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.567 [2024-11-20 09:22:56.818829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.567 [2024-11-20 09:22:56.818872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.567 BaseBdev1 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 BaseBdev2_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 true 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 [2024-11-20 09:22:56.887735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:31.567 [2024-11-20 09:22:56.887814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.567 [2024-11-20 09:22:56.887836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:31.567 [2024-11-20 09:22:56.887849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.567 [2024-11-20 09:22:56.890181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.567 [2024-11-20 09:22:56.890225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.567 BaseBdev2 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 BaseBdev3_malloc 00:10:31.567 09:22:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.567 true 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:31.567 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.568 [2024-11-20 09:22:56.970101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:31.568 [2024-11-20 09:22:56.970180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.568 [2024-11-20 09:22:56.970202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:31.568 [2024-11-20 09:22:56.970215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.568 [2024-11-20 09:22:56.972610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.568 [2024-11-20 09:22:56.972665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:31.568 BaseBdev3 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.568 [2024-11-20 09:22:56.982191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.568 [2024-11-20 09:22:56.984268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.568 [2024-11-20 09:22:56.984382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.568 [2024-11-20 09:22:56.984645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:31.568 [2024-11-20 09:22:56.984669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.568 [2024-11-20 09:22:56.985005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:31.568 [2024-11-20 09:22:56.985214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:31.568 [2024-11-20 09:22:56.985236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:31.568 [2024-11-20 09:22:56.985452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.568 09:22:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.568 09:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.568 09:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.828 09:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.828 "name": "raid_bdev1", 00:10:31.828 "uuid": "bd788b25-5712-40cb-aad2-1ac98195ee3e", 00:10:31.828 "strip_size_kb": 0, 00:10:31.828 "state": "online", 00:10:31.828 "raid_level": "raid1", 00:10:31.828 "superblock": true, 00:10:31.828 "num_base_bdevs": 3, 00:10:31.828 "num_base_bdevs_discovered": 3, 00:10:31.828 "num_base_bdevs_operational": 3, 00:10:31.828 "base_bdevs_list": [ 00:10:31.828 { 00:10:31.828 "name": "BaseBdev1", 00:10:31.828 "uuid": "67f626f3-3fc0-5d34-9ae6-907746b7f011", 00:10:31.828 "is_configured": true, 00:10:31.828 "data_offset": 2048, 00:10:31.828 "data_size": 63488 00:10:31.828 }, 00:10:31.828 { 00:10:31.828 "name": "BaseBdev2", 00:10:31.828 "uuid": "905958aa-306d-5643-9344-83d2d83ed962", 00:10:31.828 "is_configured": true, 00:10:31.828 "data_offset": 2048, 00:10:31.828 "data_size": 63488 
00:10:31.828 }, 00:10:31.828 { 00:10:31.828 "name": "BaseBdev3", 00:10:31.828 "uuid": "7e79360a-320f-53cc-b3cc-350c273c6cc8", 00:10:31.828 "is_configured": true, 00:10:31.828 "data_offset": 2048, 00:10:31.828 "data_size": 63488 00:10:31.828 } 00:10:31.828 ] 00:10:31.828 }' 00:10:31.828 09:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.828 09:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.088 09:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:32.088 09:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:32.346 [2024-11-20 09:22:57.554652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.285 
09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.285 "name": "raid_bdev1", 00:10:33.285 "uuid": "bd788b25-5712-40cb-aad2-1ac98195ee3e", 00:10:33.285 "strip_size_kb": 0, 00:10:33.285 "state": "online", 00:10:33.285 "raid_level": "raid1", 00:10:33.285 "superblock": true, 00:10:33.285 "num_base_bdevs": 3, 00:10:33.285 "num_base_bdevs_discovered": 3, 00:10:33.285 "num_base_bdevs_operational": 3, 00:10:33.285 "base_bdevs_list": [ 00:10:33.285 { 00:10:33.285 "name": "BaseBdev1", 00:10:33.285 "uuid": "67f626f3-3fc0-5d34-9ae6-907746b7f011", 
00:10:33.285 "is_configured": true, 00:10:33.285 "data_offset": 2048, 00:10:33.285 "data_size": 63488 00:10:33.285 }, 00:10:33.285 { 00:10:33.285 "name": "BaseBdev2", 00:10:33.285 "uuid": "905958aa-306d-5643-9344-83d2d83ed962", 00:10:33.285 "is_configured": true, 00:10:33.285 "data_offset": 2048, 00:10:33.285 "data_size": 63488 00:10:33.285 }, 00:10:33.285 { 00:10:33.285 "name": "BaseBdev3", 00:10:33.285 "uuid": "7e79360a-320f-53cc-b3cc-350c273c6cc8", 00:10:33.285 "is_configured": true, 00:10:33.285 "data_offset": 2048, 00:10:33.285 "data_size": 63488 00:10:33.285 } 00:10:33.285 ] 00:10:33.285 }' 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.285 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.544 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.544 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.544 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.544 [2024-11-20 09:22:58.953667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.544 [2024-11-20 09:22:58.953710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.544 [2024-11-20 09:22:58.956988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.544 [2024-11-20 09:22:58.957051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.545 [2024-11-20 09:22:58.957170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.545 [2024-11-20 09:22:58.957188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:33.545 { 00:10:33.545 "results": [ 00:10:33.545 { 00:10:33.545 "job": "raid_bdev1", 
00:10:33.545 "core_mask": "0x1", 00:10:33.545 "workload": "randrw", 00:10:33.545 "percentage": 50, 00:10:33.545 "status": "finished", 00:10:33.545 "queue_depth": 1, 00:10:33.545 "io_size": 131072, 00:10:33.545 "runtime": 1.399653, 00:10:33.545 "iops": 11444.265114281898, 00:10:33.545 "mibps": 1430.5331392852372, 00:10:33.545 "io_failed": 0, 00:10:33.545 "io_timeout": 0, 00:10:33.545 "avg_latency_us": 84.19088176456509, 00:10:33.545 "min_latency_us": 23.699563318777294, 00:10:33.545 "max_latency_us": 1767.1825327510917 00:10:33.545 } 00:10:33.545 ], 00:10:33.545 "core_count": 1 00:10:33.545 } 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69404 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69404 ']' 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69404 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69404 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.545 killing process with pid 69404 00:10:33.545 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69404' 00:10:33.804 09:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69404 00:10:33.804 [2024-11-20 09:22:58.998832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.804 09:22:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69404 00:10:34.064 [2024-11-20 09:22:59.278091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rShyfMWcHZ 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:35.447 00:10:35.447 real 0m4.885s 00:10:35.447 user 0m5.836s 00:10:35.447 sys 0m0.569s 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.447 09:23:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.447 ************************************ 00:10:35.447 END TEST raid_read_error_test 00:10:35.447 ************************************ 00:10:35.447 09:23:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:35.447 09:23:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:35.447 09:23:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.447 09:23:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.447 ************************************ 00:10:35.447 START TEST raid_write_error_test 00:10:35.447 ************************************ 00:10:35.447 09:23:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tz2UPOzLsK 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69555 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69555 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69555 ']' 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.447 09:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:35.448 09:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.448 09:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.448 [2024-11-20 09:23:00.836947] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:10:35.448 [2024-11-20 09:23:00.837150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69555 ] 00:10:35.707 [2024-11-20 09:23:01.036736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.967 [2024-11-20 09:23:01.175287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.967 [2024-11-20 09:23:01.416789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.967 [2024-11-20 09:23:01.416840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 BaseBdev1_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 true 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 [2024-11-20 09:23:01.827953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:36.537 [2024-11-20 09:23:01.828025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.537 [2024-11-20 09:23:01.828050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:36.537 [2024-11-20 09:23:01.828062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.537 [2024-11-20 09:23:01.830573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.537 [2024-11-20 09:23:01.830617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:36.537 BaseBdev1 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.537 BaseBdev2_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 true 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 [2024-11-20 09:23:01.901168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:36.537 [2024-11-20 09:23:01.901257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.537 [2024-11-20 09:23:01.901278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:36.537 [2024-11-20 09:23:01.901290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.537 [2024-11-20 09:23:01.903727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.537 [2024-11-20 09:23:01.903776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:36.537 BaseBdev2 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.537 09:23:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 BaseBdev3_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.537 true 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.537 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.538 [2024-11-20 09:23:01.984580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:36.538 [2024-11-20 09:23:01.984652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.538 [2024-11-20 09:23:01.984675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:36.538 [2024-11-20 09:23:01.984688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.538 [2024-11-20 09:23:01.987191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.538 [2024-11-20 09:23:01.987238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:36.538 BaseBdev3 00:10:36.538 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.538 09:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:36.797 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.797 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.797 [2024-11-20 09:23:01.996640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.797 [2024-11-20 09:23:01.998666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.797 [2024-11-20 09:23:01.998756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.797 [2024-11-20 09:23:01.998992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:36.798 [2024-11-20 09:23:01.999015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.798 [2024-11-20 09:23:01.999316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:36.798 [2024-11-20 09:23:01.999540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:36.798 [2024-11-20 09:23:01.999565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:36.798 [2024-11-20 09:23:01.999775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.798 09:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.798 "name": "raid_bdev1", 00:10:36.798 "uuid": "d4756760-da08-4d1f-b382-7626a6386297", 00:10:36.798 "strip_size_kb": 0, 00:10:36.798 "state": "online", 00:10:36.798 "raid_level": "raid1", 00:10:36.798 "superblock": true, 00:10:36.798 "num_base_bdevs": 3, 00:10:36.798 "num_base_bdevs_discovered": 3, 00:10:36.798 "num_base_bdevs_operational": 3, 00:10:36.798 "base_bdevs_list": [ 00:10:36.798 { 00:10:36.798 "name": "BaseBdev1", 00:10:36.798 
"uuid": "5d4f96cb-5459-54f8-baff-996fea77d2c4", 00:10:36.798 "is_configured": true, 00:10:36.798 "data_offset": 2048, 00:10:36.798 "data_size": 63488 00:10:36.798 }, 00:10:36.798 { 00:10:36.798 "name": "BaseBdev2", 00:10:36.798 "uuid": "46718528-9a11-546c-97ae-a90bc3a556e6", 00:10:36.798 "is_configured": true, 00:10:36.798 "data_offset": 2048, 00:10:36.798 "data_size": 63488 00:10:36.798 }, 00:10:36.798 { 00:10:36.798 "name": "BaseBdev3", 00:10:36.798 "uuid": "2da4bb36-e7ff-535d-aad6-a1d9c2831ec3", 00:10:36.798 "is_configured": true, 00:10:36.798 "data_offset": 2048, 00:10:36.798 "data_size": 63488 00:10:36.798 } 00:10:36.798 ] 00:10:36.798 }' 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.798 09:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.057 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:37.057 09:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:37.316 [2024-11-20 09:23:02.601238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.292 [2024-11-20 09:23:03.510274] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:38.292 [2024-11-20 09:23:03.510356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.292 [2024-11-20 09:23:03.510697] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.292 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.293 "name": "raid_bdev1", 00:10:38.293 "uuid": "d4756760-da08-4d1f-b382-7626a6386297", 00:10:38.293 "strip_size_kb": 0, 00:10:38.293 "state": "online", 00:10:38.293 "raid_level": "raid1", 00:10:38.293 "superblock": true, 00:10:38.293 "num_base_bdevs": 3, 00:10:38.293 "num_base_bdevs_discovered": 2, 00:10:38.293 "num_base_bdevs_operational": 2, 00:10:38.293 "base_bdevs_list": [ 00:10:38.293 { 00:10:38.293 "name": null, 00:10:38.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.293 "is_configured": false, 00:10:38.293 "data_offset": 0, 00:10:38.293 "data_size": 63488 00:10:38.293 }, 00:10:38.293 { 00:10:38.293 "name": "BaseBdev2", 00:10:38.293 "uuid": "46718528-9a11-546c-97ae-a90bc3a556e6", 00:10:38.293 "is_configured": true, 00:10:38.293 "data_offset": 2048, 00:10:38.293 "data_size": 63488 00:10:38.293 }, 00:10:38.293 { 00:10:38.293 "name": "BaseBdev3", 00:10:38.293 "uuid": "2da4bb36-e7ff-535d-aad6-a1d9c2831ec3", 00:10:38.293 "is_configured": true, 00:10:38.293 "data_offset": 2048, 00:10:38.293 "data_size": 63488 00:10:38.293 } 00:10:38.293 ] 00:10:38.293 }' 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.293 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.552 [2024-11-20 09:23:03.989592] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.552 [2024-11-20 09:23:03.989639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.552 [2024-11-20 09:23:03.992854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.552 [2024-11-20 09:23:03.992935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.552 [2024-11-20 09:23:03.993028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.552 [2024-11-20 09:23:03.993045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:38.552 { 00:10:38.552 "results": [ 00:10:38.552 { 00:10:38.552 "job": "raid_bdev1", 00:10:38.552 "core_mask": "0x1", 00:10:38.552 "workload": "randrw", 00:10:38.552 "percentage": 50, 00:10:38.552 "status": "finished", 00:10:38.552 "queue_depth": 1, 00:10:38.552 "io_size": 131072, 00:10:38.552 "runtime": 1.38882, 00:10:38.552 "iops": 12447.977419680015, 00:10:38.552 "mibps": 1555.997177460002, 00:10:38.552 "io_failed": 0, 00:10:38.552 "io_timeout": 0, 00:10:38.552 "avg_latency_us": 77.09942792941163, 00:10:38.552 "min_latency_us": 27.165065502183406, 00:10:38.552 "max_latency_us": 1674.172925764192 00:10:38.552 } 00:10:38.552 ], 00:10:38.552 "core_count": 1 00:10:38.552 } 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69555 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69555 ']' 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69555 00:10:38.552 09:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:38.552 09:23:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.810 09:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69555 00:10:38.810 09:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.810 09:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.810 killing process with pid 69555 00:10:38.810 09:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69555' 00:10:38.810 09:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69555 00:10:38.810 09:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69555 00:10:38.810 [2024-11-20 09:23:04.037533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.068 [2024-11-20 09:23:04.312244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.446 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tz2UPOzLsK 00:10:40.446 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:40.446 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:40.446 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:40.447 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:40.447 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.447 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.447 09:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:40.447 00:10:40.447 real 0m4.986s 00:10:40.447 user 0m5.981s 00:10:40.447 sys 0m0.654s 00:10:40.447 09:23:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.447 09:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.447 ************************************ 00:10:40.447 END TEST raid_write_error_test 00:10:40.447 ************************************ 00:10:40.447 09:23:05 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:40.447 09:23:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:40.447 09:23:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:40.447 09:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:40.447 09:23:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.447 09:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.447 ************************************ 00:10:40.447 START TEST raid_state_function_test 00:10:40.447 ************************************ 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:40.447 
09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69699 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69699' 00:10:40.447 Process raid pid: 69699 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69699 00:10:40.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69699 ']' 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.447 09:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.447 [2024-11-20 09:23:05.849963] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:10:40.447 [2024-11-20 09:23:05.850097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.707 [2024-11-20 09:23:06.026761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.967 [2024-11-20 09:23:06.161064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.967 [2024-11-20 09:23:06.397899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.967 [2024-11-20 09:23:06.397948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.536 [2024-11-20 09:23:06.763596] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.536 [2024-11-20 09:23:06.763665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.536 [2024-11-20 09:23:06.763677] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.536 [2024-11-20 09:23:06.763688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.536 [2024-11-20 09:23:06.763695] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:41.536 [2024-11-20 09:23:06.763714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.536 [2024-11-20 09:23:06.763722] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.536 [2024-11-20 09:23:06.763747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.536 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.536 "name": "Existed_Raid", 00:10:41.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.536 "strip_size_kb": 64, 00:10:41.536 "state": "configuring", 00:10:41.536 "raid_level": "raid0", 00:10:41.536 "superblock": false, 00:10:41.536 "num_base_bdevs": 4, 00:10:41.536 "num_base_bdevs_discovered": 0, 00:10:41.536 "num_base_bdevs_operational": 4, 00:10:41.536 "base_bdevs_list": [ 00:10:41.536 { 00:10:41.536 "name": "BaseBdev1", 00:10:41.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.536 "is_configured": false, 00:10:41.536 "data_offset": 0, 00:10:41.537 "data_size": 0 00:10:41.537 }, 00:10:41.537 { 00:10:41.537 "name": "BaseBdev2", 00:10:41.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.537 "is_configured": false, 00:10:41.537 "data_offset": 0, 00:10:41.537 "data_size": 0 00:10:41.537 }, 00:10:41.537 { 00:10:41.537 "name": "BaseBdev3", 00:10:41.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.537 "is_configured": false, 00:10:41.537 "data_offset": 0, 00:10:41.537 "data_size": 0 00:10:41.537 }, 00:10:41.537 { 00:10:41.537 "name": "BaseBdev4", 00:10:41.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.537 "is_configured": false, 00:10:41.537 "data_offset": 0, 00:10:41.537 "data_size": 0 00:10:41.537 } 00:10:41.537 ] 00:10:41.537 }' 00:10:41.537 09:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.537 09:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.796 [2024-11-20 09:23:07.218741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.796 [2024-11-20 09:23:07.218864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.796 [2024-11-20 09:23:07.230737] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.796 [2024-11-20 09:23:07.230862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.796 [2024-11-20 09:23:07.230894] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.796 [2024-11-20 09:23:07.230921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.796 [2024-11-20 09:23:07.230942] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.796 [2024-11-20 09:23:07.230967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.796 [2024-11-20 09:23:07.230995] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.796 [2024-11-20 09:23:07.231031] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.796 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 [2024-11-20 09:23:07.284830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.055 BaseBdev1 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 [ 00:10:42.055 { 00:10:42.055 "name": "BaseBdev1", 00:10:42.055 "aliases": [ 00:10:42.055 "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4" 00:10:42.055 ], 00:10:42.055 "product_name": "Malloc disk", 00:10:42.055 "block_size": 512, 00:10:42.055 "num_blocks": 65536, 00:10:42.055 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:42.055 "assigned_rate_limits": { 00:10:42.055 "rw_ios_per_sec": 0, 00:10:42.055 "rw_mbytes_per_sec": 0, 00:10:42.055 "r_mbytes_per_sec": 0, 00:10:42.055 "w_mbytes_per_sec": 0 00:10:42.055 }, 00:10:42.055 "claimed": true, 00:10:42.055 "claim_type": "exclusive_write", 00:10:42.055 "zoned": false, 00:10:42.055 "supported_io_types": { 00:10:42.055 "read": true, 00:10:42.055 "write": true, 00:10:42.055 "unmap": true, 00:10:42.055 "flush": true, 00:10:42.055 "reset": true, 00:10:42.055 "nvme_admin": false, 00:10:42.055 "nvme_io": false, 00:10:42.055 "nvme_io_md": false, 00:10:42.055 "write_zeroes": true, 00:10:42.055 "zcopy": true, 00:10:42.055 "get_zone_info": false, 00:10:42.055 "zone_management": false, 00:10:42.055 "zone_append": false, 00:10:42.055 "compare": false, 00:10:42.055 "compare_and_write": false, 00:10:42.055 "abort": true, 00:10:42.055 "seek_hole": false, 00:10:42.055 "seek_data": false, 00:10:42.055 "copy": true, 00:10:42.055 "nvme_iov_md": false 00:10:42.055 }, 00:10:42.055 "memory_domains": [ 00:10:42.055 { 00:10:42.055 "dma_device_id": "system", 00:10:42.055 "dma_device_type": 1 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.055 "dma_device_type": 2 00:10:42.055 } 00:10:42.055 ], 00:10:42.055 "driver_specific": {} 00:10:42.055 } 00:10:42.055 ] 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.055 "name": "Existed_Raid", 
00:10:42.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.055 "strip_size_kb": 64, 00:10:42.055 "state": "configuring", 00:10:42.055 "raid_level": "raid0", 00:10:42.055 "superblock": false, 00:10:42.055 "num_base_bdevs": 4, 00:10:42.055 "num_base_bdevs_discovered": 1, 00:10:42.055 "num_base_bdevs_operational": 4, 00:10:42.055 "base_bdevs_list": [ 00:10:42.055 { 00:10:42.055 "name": "BaseBdev1", 00:10:42.055 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 0, 00:10:42.055 "data_size": 65536 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "name": "BaseBdev2", 00:10:42.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.055 "is_configured": false, 00:10:42.055 "data_offset": 0, 00:10:42.055 "data_size": 0 00:10:42.056 }, 00:10:42.056 { 00:10:42.056 "name": "BaseBdev3", 00:10:42.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.056 "is_configured": false, 00:10:42.056 "data_offset": 0, 00:10:42.056 "data_size": 0 00:10:42.056 }, 00:10:42.056 { 00:10:42.056 "name": "BaseBdev4", 00:10:42.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.056 "is_configured": false, 00:10:42.056 "data_offset": 0, 00:10:42.056 "data_size": 0 00:10:42.056 } 00:10:42.056 ] 00:10:42.056 }' 00:10:42.056 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.056 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.625 [2024-11-20 09:23:07.816156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.625 [2024-11-20 09:23:07.816230] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.625 [2024-11-20 09:23:07.828226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.625 [2024-11-20 09:23:07.830409] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.625 [2024-11-20 09:23:07.830488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.625 [2024-11-20 09:23:07.830501] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.625 [2024-11-20 09:23:07.830514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.625 [2024-11-20 09:23:07.830521] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.625 [2024-11-20 09:23:07.830531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.625 "name": "Existed_Raid", 00:10:42.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.625 "strip_size_kb": 64, 00:10:42.625 "state": "configuring", 00:10:42.625 "raid_level": "raid0", 00:10:42.625 "superblock": false, 00:10:42.625 "num_base_bdevs": 4, 00:10:42.625 
"num_base_bdevs_discovered": 1, 00:10:42.625 "num_base_bdevs_operational": 4, 00:10:42.625 "base_bdevs_list": [ 00:10:42.625 { 00:10:42.625 "name": "BaseBdev1", 00:10:42.625 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:42.625 "is_configured": true, 00:10:42.625 "data_offset": 0, 00:10:42.625 "data_size": 65536 00:10:42.625 }, 00:10:42.625 { 00:10:42.625 "name": "BaseBdev2", 00:10:42.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.625 "is_configured": false, 00:10:42.625 "data_offset": 0, 00:10:42.625 "data_size": 0 00:10:42.625 }, 00:10:42.625 { 00:10:42.625 "name": "BaseBdev3", 00:10:42.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.625 "is_configured": false, 00:10:42.625 "data_offset": 0, 00:10:42.625 "data_size": 0 00:10:42.625 }, 00:10:42.625 { 00:10:42.625 "name": "BaseBdev4", 00:10:42.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.625 "is_configured": false, 00:10:42.625 "data_offset": 0, 00:10:42.625 "data_size": 0 00:10:42.625 } 00:10:42.625 ] 00:10:42.625 }' 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.625 09:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.885 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.885 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.885 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.145 [2024-11-20 09:23:08.356210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.145 BaseBdev2 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:43.145 09:23:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.145 [ 00:10:43.145 { 00:10:43.145 "name": "BaseBdev2", 00:10:43.145 "aliases": [ 00:10:43.145 "0ebbcc7e-a6c6-4819-aff3-ce79476985e3" 00:10:43.145 ], 00:10:43.145 "product_name": "Malloc disk", 00:10:43.145 "block_size": 512, 00:10:43.145 "num_blocks": 65536, 00:10:43.145 "uuid": "0ebbcc7e-a6c6-4819-aff3-ce79476985e3", 00:10:43.145 "assigned_rate_limits": { 00:10:43.145 "rw_ios_per_sec": 0, 00:10:43.145 "rw_mbytes_per_sec": 0, 00:10:43.145 "r_mbytes_per_sec": 0, 00:10:43.145 "w_mbytes_per_sec": 0 00:10:43.145 }, 00:10:43.145 "claimed": true, 00:10:43.145 "claim_type": "exclusive_write", 00:10:43.145 "zoned": false, 00:10:43.145 "supported_io_types": { 
00:10:43.145 "read": true, 00:10:43.145 "write": true, 00:10:43.145 "unmap": true, 00:10:43.145 "flush": true, 00:10:43.145 "reset": true, 00:10:43.145 "nvme_admin": false, 00:10:43.145 "nvme_io": false, 00:10:43.145 "nvme_io_md": false, 00:10:43.145 "write_zeroes": true, 00:10:43.145 "zcopy": true, 00:10:43.145 "get_zone_info": false, 00:10:43.145 "zone_management": false, 00:10:43.145 "zone_append": false, 00:10:43.145 "compare": false, 00:10:43.145 "compare_and_write": false, 00:10:43.145 "abort": true, 00:10:43.145 "seek_hole": false, 00:10:43.145 "seek_data": false, 00:10:43.145 "copy": true, 00:10:43.145 "nvme_iov_md": false 00:10:43.145 }, 00:10:43.145 "memory_domains": [ 00:10:43.145 { 00:10:43.145 "dma_device_id": "system", 00:10:43.145 "dma_device_type": 1 00:10:43.145 }, 00:10:43.145 { 00:10:43.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.145 "dma_device_type": 2 00:10:43.145 } 00:10:43.145 ], 00:10:43.145 "driver_specific": {} 00:10:43.145 } 00:10:43.145 ] 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.145 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.145 "name": "Existed_Raid", 00:10:43.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.145 "strip_size_kb": 64, 00:10:43.145 "state": "configuring", 00:10:43.145 "raid_level": "raid0", 00:10:43.145 "superblock": false, 00:10:43.145 "num_base_bdevs": 4, 00:10:43.145 "num_base_bdevs_discovered": 2, 00:10:43.145 "num_base_bdevs_operational": 4, 00:10:43.145 "base_bdevs_list": [ 00:10:43.145 { 00:10:43.145 "name": "BaseBdev1", 00:10:43.145 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:43.145 "is_configured": true, 00:10:43.145 "data_offset": 0, 00:10:43.145 "data_size": 65536 00:10:43.145 }, 00:10:43.145 { 00:10:43.145 "name": "BaseBdev2", 00:10:43.145 "uuid": "0ebbcc7e-a6c6-4819-aff3-ce79476985e3", 00:10:43.145 
"is_configured": true, 00:10:43.145 "data_offset": 0, 00:10:43.146 "data_size": 65536 00:10:43.146 }, 00:10:43.146 { 00:10:43.146 "name": "BaseBdev3", 00:10:43.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.146 "is_configured": false, 00:10:43.146 "data_offset": 0, 00:10:43.146 "data_size": 0 00:10:43.146 }, 00:10:43.146 { 00:10:43.146 "name": "BaseBdev4", 00:10:43.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.146 "is_configured": false, 00:10:43.146 "data_offset": 0, 00:10:43.146 "data_size": 0 00:10:43.146 } 00:10:43.146 ] 00:10:43.146 }' 00:10:43.146 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.146 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.405 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.405 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.405 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 [2024-11-20 09:23:08.897845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.665 BaseBdev3 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 [ 00:10:43.665 { 00:10:43.665 "name": "BaseBdev3", 00:10:43.665 "aliases": [ 00:10:43.665 "a6233716-196e-44c9-a00b-db2cce4d8a40" 00:10:43.665 ], 00:10:43.665 "product_name": "Malloc disk", 00:10:43.665 "block_size": 512, 00:10:43.665 "num_blocks": 65536, 00:10:43.665 "uuid": "a6233716-196e-44c9-a00b-db2cce4d8a40", 00:10:43.665 "assigned_rate_limits": { 00:10:43.665 "rw_ios_per_sec": 0, 00:10:43.665 "rw_mbytes_per_sec": 0, 00:10:43.665 "r_mbytes_per_sec": 0, 00:10:43.665 "w_mbytes_per_sec": 0 00:10:43.665 }, 00:10:43.665 "claimed": true, 00:10:43.665 "claim_type": "exclusive_write", 00:10:43.665 "zoned": false, 00:10:43.665 "supported_io_types": { 00:10:43.665 "read": true, 00:10:43.665 "write": true, 00:10:43.665 "unmap": true, 00:10:43.665 "flush": true, 00:10:43.665 "reset": true, 00:10:43.665 "nvme_admin": false, 00:10:43.665 "nvme_io": false, 00:10:43.665 "nvme_io_md": false, 00:10:43.665 "write_zeroes": true, 00:10:43.665 "zcopy": true, 00:10:43.665 "get_zone_info": false, 00:10:43.665 "zone_management": false, 00:10:43.665 "zone_append": false, 00:10:43.665 "compare": false, 00:10:43.665 "compare_and_write": false, 
00:10:43.665 "abort": true, 00:10:43.665 "seek_hole": false, 00:10:43.665 "seek_data": false, 00:10:43.665 "copy": true, 00:10:43.665 "nvme_iov_md": false 00:10:43.665 }, 00:10:43.665 "memory_domains": [ 00:10:43.665 { 00:10:43.665 "dma_device_id": "system", 00:10:43.665 "dma_device_type": 1 00:10:43.665 }, 00:10:43.665 { 00:10:43.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.665 "dma_device_type": 2 00:10:43.665 } 00:10:43.665 ], 00:10:43.665 "driver_specific": {} 00:10:43.665 } 00:10:43.665 ] 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.665 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.665 "name": "Existed_Raid", 00:10:43.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.665 "strip_size_kb": 64, 00:10:43.665 "state": "configuring", 00:10:43.665 "raid_level": "raid0", 00:10:43.666 "superblock": false, 00:10:43.666 "num_base_bdevs": 4, 00:10:43.666 "num_base_bdevs_discovered": 3, 00:10:43.666 "num_base_bdevs_operational": 4, 00:10:43.666 "base_bdevs_list": [ 00:10:43.666 { 00:10:43.666 "name": "BaseBdev1", 00:10:43.666 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:43.666 "is_configured": true, 00:10:43.666 "data_offset": 0, 00:10:43.666 "data_size": 65536 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "name": "BaseBdev2", 00:10:43.666 "uuid": "0ebbcc7e-a6c6-4819-aff3-ce79476985e3", 00:10:43.666 "is_configured": true, 00:10:43.666 "data_offset": 0, 00:10:43.666 "data_size": 65536 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "name": "BaseBdev3", 00:10:43.666 "uuid": "a6233716-196e-44c9-a00b-db2cce4d8a40", 00:10:43.666 "is_configured": true, 00:10:43.666 "data_offset": 0, 00:10:43.666 "data_size": 65536 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "name": "BaseBdev4", 00:10:43.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.666 "is_configured": false, 
00:10:43.666 "data_offset": 0, 00:10:43.666 "data_size": 0 00:10:43.666 } 00:10:43.666 ] 00:10:43.666 }' 00:10:43.666 09:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.666 09:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.925 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.925 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.925 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.184 [2024-11-20 09:23:09.415271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.184 [2024-11-20 09:23:09.415336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.184 [2024-11-20 09:23:09.415346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:44.184 [2024-11-20 09:23:09.415742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.184 [2024-11-20 09:23:09.415948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.184 [2024-11-20 09:23:09.415975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.184 [2024-11-20 09:23:09.416311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.184 BaseBdev4 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.184 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.185 [ 00:10:44.185 { 00:10:44.185 "name": "BaseBdev4", 00:10:44.185 "aliases": [ 00:10:44.185 "f0e54d4f-f7d3-4d4b-b1d4-e502b2f58ae3" 00:10:44.185 ], 00:10:44.185 "product_name": "Malloc disk", 00:10:44.185 "block_size": 512, 00:10:44.185 "num_blocks": 65536, 00:10:44.185 "uuid": "f0e54d4f-f7d3-4d4b-b1d4-e502b2f58ae3", 00:10:44.185 "assigned_rate_limits": { 00:10:44.185 "rw_ios_per_sec": 0, 00:10:44.185 "rw_mbytes_per_sec": 0, 00:10:44.185 "r_mbytes_per_sec": 0, 00:10:44.185 "w_mbytes_per_sec": 0 00:10:44.185 }, 00:10:44.185 "claimed": true, 00:10:44.185 "claim_type": "exclusive_write", 00:10:44.185 "zoned": false, 00:10:44.185 "supported_io_types": { 00:10:44.185 "read": true, 00:10:44.185 "write": true, 00:10:44.185 "unmap": true, 00:10:44.185 "flush": true, 00:10:44.185 "reset": true, 00:10:44.185 
"nvme_admin": false, 00:10:44.185 "nvme_io": false, 00:10:44.185 "nvme_io_md": false, 00:10:44.185 "write_zeroes": true, 00:10:44.185 "zcopy": true, 00:10:44.185 "get_zone_info": false, 00:10:44.185 "zone_management": false, 00:10:44.185 "zone_append": false, 00:10:44.185 "compare": false, 00:10:44.185 "compare_and_write": false, 00:10:44.185 "abort": true, 00:10:44.185 "seek_hole": false, 00:10:44.185 "seek_data": false, 00:10:44.185 "copy": true, 00:10:44.185 "nvme_iov_md": false 00:10:44.185 }, 00:10:44.185 "memory_domains": [ 00:10:44.185 { 00:10:44.185 "dma_device_id": "system", 00:10:44.185 "dma_device_type": 1 00:10:44.185 }, 00:10:44.185 { 00:10:44.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.185 "dma_device_type": 2 00:10:44.185 } 00:10:44.185 ], 00:10:44.185 "driver_specific": {} 00:10:44.185 } 00:10:44.185 ] 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.185 09:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.185 "name": "Existed_Raid", 00:10:44.185 "uuid": "bf5b0968-b19a-453c-98c7-b6c6e718cc23", 00:10:44.185 "strip_size_kb": 64, 00:10:44.185 "state": "online", 00:10:44.185 "raid_level": "raid0", 00:10:44.185 "superblock": false, 00:10:44.185 "num_base_bdevs": 4, 00:10:44.185 "num_base_bdevs_discovered": 4, 00:10:44.185 "num_base_bdevs_operational": 4, 00:10:44.185 "base_bdevs_list": [ 00:10:44.185 { 00:10:44.185 "name": "BaseBdev1", 00:10:44.185 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:44.185 "is_configured": true, 00:10:44.185 "data_offset": 0, 00:10:44.185 "data_size": 65536 00:10:44.185 }, 00:10:44.185 { 00:10:44.185 "name": "BaseBdev2", 00:10:44.185 "uuid": "0ebbcc7e-a6c6-4819-aff3-ce79476985e3", 00:10:44.185 "is_configured": true, 00:10:44.185 "data_offset": 0, 00:10:44.185 "data_size": 65536 00:10:44.185 }, 00:10:44.185 { 00:10:44.185 "name": "BaseBdev3", 00:10:44.185 "uuid": 
"a6233716-196e-44c9-a00b-db2cce4d8a40", 00:10:44.185 "is_configured": true, 00:10:44.185 "data_offset": 0, 00:10:44.185 "data_size": 65536 00:10:44.185 }, 00:10:44.185 { 00:10:44.185 "name": "BaseBdev4", 00:10:44.185 "uuid": "f0e54d4f-f7d3-4d4b-b1d4-e502b2f58ae3", 00:10:44.185 "is_configured": true, 00:10:44.185 "data_offset": 0, 00:10:44.185 "data_size": 65536 00:10:44.185 } 00:10:44.185 ] 00:10:44.185 }' 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.185 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.755 [2024-11-20 09:23:09.954834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.755 09:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.755 09:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.755 "name": "Existed_Raid", 00:10:44.755 "aliases": [ 00:10:44.755 "bf5b0968-b19a-453c-98c7-b6c6e718cc23" 00:10:44.755 ], 00:10:44.755 "product_name": "Raid Volume", 00:10:44.755 "block_size": 512, 00:10:44.755 "num_blocks": 262144, 00:10:44.755 "uuid": "bf5b0968-b19a-453c-98c7-b6c6e718cc23", 00:10:44.755 "assigned_rate_limits": { 00:10:44.755 "rw_ios_per_sec": 0, 00:10:44.755 "rw_mbytes_per_sec": 0, 00:10:44.755 "r_mbytes_per_sec": 0, 00:10:44.755 "w_mbytes_per_sec": 0 00:10:44.755 }, 00:10:44.755 "claimed": false, 00:10:44.755 "zoned": false, 00:10:44.755 "supported_io_types": { 00:10:44.755 "read": true, 00:10:44.755 "write": true, 00:10:44.755 "unmap": true, 00:10:44.755 "flush": true, 00:10:44.755 "reset": true, 00:10:44.755 "nvme_admin": false, 00:10:44.755 "nvme_io": false, 00:10:44.755 "nvme_io_md": false, 00:10:44.755 "write_zeroes": true, 00:10:44.755 "zcopy": false, 00:10:44.755 "get_zone_info": false, 00:10:44.755 "zone_management": false, 00:10:44.755 "zone_append": false, 00:10:44.755 "compare": false, 00:10:44.755 "compare_and_write": false, 00:10:44.755 "abort": false, 00:10:44.755 "seek_hole": false, 00:10:44.755 "seek_data": false, 00:10:44.755 "copy": false, 00:10:44.755 "nvme_iov_md": false 00:10:44.755 }, 00:10:44.755 "memory_domains": [ 00:10:44.755 { 00:10:44.755 "dma_device_id": "system", 00:10:44.755 "dma_device_type": 1 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.755 "dma_device_type": 2 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "system", 00:10:44.755 "dma_device_type": 1 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.755 "dma_device_type": 2 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "system", 00:10:44.755 "dma_device_type": 1 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:44.755 "dma_device_type": 2 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "system", 00:10:44.755 "dma_device_type": 1 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.755 "dma_device_type": 2 00:10:44.755 } 00:10:44.755 ], 00:10:44.755 "driver_specific": { 00:10:44.755 "raid": { 00:10:44.756 "uuid": "bf5b0968-b19a-453c-98c7-b6c6e718cc23", 00:10:44.756 "strip_size_kb": 64, 00:10:44.756 "state": "online", 00:10:44.756 "raid_level": "raid0", 00:10:44.756 "superblock": false, 00:10:44.756 "num_base_bdevs": 4, 00:10:44.756 "num_base_bdevs_discovered": 4, 00:10:44.756 "num_base_bdevs_operational": 4, 00:10:44.756 "base_bdevs_list": [ 00:10:44.756 { 00:10:44.756 "name": "BaseBdev1", 00:10:44.756 "uuid": "e9ef9fce-937b-4ddf-b5b5-859d6ebb82f4", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 0, 00:10:44.756 "data_size": 65536 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 "name": "BaseBdev2", 00:10:44.756 "uuid": "0ebbcc7e-a6c6-4819-aff3-ce79476985e3", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 0, 00:10:44.756 "data_size": 65536 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 "name": "BaseBdev3", 00:10:44.756 "uuid": "a6233716-196e-44c9-a00b-db2cce4d8a40", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 0, 00:10:44.756 "data_size": 65536 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 "name": "BaseBdev4", 00:10:44.756 "uuid": "f0e54d4f-f7d3-4d4b-b1d4-e502b2f58ae3", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 0, 00:10:44.756 "data_size": 65536 00:10:44.756 } 00:10:44.756 ] 00:10:44.756 } 00:10:44.756 } 00:10:44.756 }' 00:10:44.756 09:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:44.756 BaseBdev2 00:10:44.756 BaseBdev3 
00:10:44.756 BaseBdev4' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.756 09:23:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.756 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.016 09:23:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.016 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.017 [2024-11-20 09:23:10.285962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.017 [2024-11-20 09:23:10.285998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.017 [2024-11-20 09:23:10.286056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.017 "name": "Existed_Raid", 00:10:45.017 "uuid": "bf5b0968-b19a-453c-98c7-b6c6e718cc23", 00:10:45.017 "strip_size_kb": 64, 00:10:45.017 "state": "offline", 00:10:45.017 "raid_level": "raid0", 00:10:45.017 "superblock": false, 00:10:45.017 "num_base_bdevs": 4, 00:10:45.017 "num_base_bdevs_discovered": 3, 00:10:45.017 "num_base_bdevs_operational": 3, 00:10:45.017 "base_bdevs_list": [ 00:10:45.017 { 00:10:45.017 "name": null, 00:10:45.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.017 "is_configured": false, 00:10:45.017 "data_offset": 0, 00:10:45.017 "data_size": 65536 00:10:45.017 }, 00:10:45.017 { 00:10:45.017 "name": "BaseBdev2", 00:10:45.017 "uuid": "0ebbcc7e-a6c6-4819-aff3-ce79476985e3", 00:10:45.017 "is_configured": 
true, 00:10:45.017 "data_offset": 0, 00:10:45.017 "data_size": 65536 00:10:45.017 }, 00:10:45.017 { 00:10:45.017 "name": "BaseBdev3", 00:10:45.017 "uuid": "a6233716-196e-44c9-a00b-db2cce4d8a40", 00:10:45.017 "is_configured": true, 00:10:45.017 "data_offset": 0, 00:10:45.017 "data_size": 65536 00:10:45.017 }, 00:10:45.017 { 00:10:45.017 "name": "BaseBdev4", 00:10:45.017 "uuid": "f0e54d4f-f7d3-4d4b-b1d4-e502b2f58ae3", 00:10:45.017 "is_configured": true, 00:10:45.017 "data_offset": 0, 00:10:45.017 "data_size": 65536 00:10:45.017 } 00:10:45.017 ] 00:10:45.017 }' 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.017 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:45.586 09:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.586 [2024-11-20 09:23:10.971932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.846 [2024-11-20 09:23:11.141505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.846 09:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.846 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.105 [2024-11-20 09:23:11.309499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:46.105 [2024-11-20 09:23:11.309554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.105 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.106 BaseBdev2 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.106 [ 00:10:46.106 { 00:10:46.106 "name": "BaseBdev2", 00:10:46.106 "aliases": [ 00:10:46.106 "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c" 00:10:46.106 ], 00:10:46.106 "product_name": "Malloc disk", 00:10:46.106 "block_size": 512, 00:10:46.106 "num_blocks": 65536, 00:10:46.106 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:46.106 "assigned_rate_limits": { 00:10:46.106 "rw_ios_per_sec": 0, 00:10:46.106 "rw_mbytes_per_sec": 0, 00:10:46.106 "r_mbytes_per_sec": 0, 00:10:46.106 "w_mbytes_per_sec": 0 00:10:46.106 }, 00:10:46.106 "claimed": false, 00:10:46.106 "zoned": false, 00:10:46.106 "supported_io_types": { 00:10:46.106 "read": true, 00:10:46.106 "write": true, 00:10:46.106 "unmap": true, 00:10:46.106 "flush": true, 00:10:46.106 "reset": true, 00:10:46.106 "nvme_admin": false, 00:10:46.106 "nvme_io": false, 00:10:46.106 "nvme_io_md": false, 00:10:46.106 "write_zeroes": true, 00:10:46.106 "zcopy": true, 00:10:46.106 "get_zone_info": false, 00:10:46.106 "zone_management": false, 00:10:46.106 "zone_append": false, 00:10:46.106 "compare": false, 00:10:46.106 "compare_and_write": false, 00:10:46.106 "abort": true, 00:10:46.106 "seek_hole": false, 00:10:46.106 
"seek_data": false, 00:10:46.106 "copy": true, 00:10:46.106 "nvme_iov_md": false 00:10:46.106 }, 00:10:46.106 "memory_domains": [ 00:10:46.106 { 00:10:46.106 "dma_device_id": "system", 00:10:46.106 "dma_device_type": 1 00:10:46.106 }, 00:10:46.106 { 00:10:46.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.106 "dma_device_type": 2 00:10:46.106 } 00:10:46.106 ], 00:10:46.106 "driver_specific": {} 00:10:46.106 } 00:10:46.106 ] 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.106 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.366 BaseBdev3 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.366 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.367 [ 00:10:46.367 { 00:10:46.367 "name": "BaseBdev3", 00:10:46.367 "aliases": [ 00:10:46.367 "ca3013af-fd3e-424b-838f-ce17c56fe889" 00:10:46.367 ], 00:10:46.367 "product_name": "Malloc disk", 00:10:46.367 "block_size": 512, 00:10:46.367 "num_blocks": 65536, 00:10:46.367 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:46.367 "assigned_rate_limits": { 00:10:46.367 "rw_ios_per_sec": 0, 00:10:46.367 "rw_mbytes_per_sec": 0, 00:10:46.367 "r_mbytes_per_sec": 0, 00:10:46.367 "w_mbytes_per_sec": 0 00:10:46.367 }, 00:10:46.367 "claimed": false, 00:10:46.367 "zoned": false, 00:10:46.367 "supported_io_types": { 00:10:46.367 "read": true, 00:10:46.367 "write": true, 00:10:46.367 "unmap": true, 00:10:46.367 "flush": true, 00:10:46.367 "reset": true, 00:10:46.367 "nvme_admin": false, 00:10:46.367 "nvme_io": false, 00:10:46.367 "nvme_io_md": false, 00:10:46.367 "write_zeroes": true, 00:10:46.367 "zcopy": true, 00:10:46.367 "get_zone_info": false, 00:10:46.367 "zone_management": false, 00:10:46.367 "zone_append": false, 00:10:46.367 "compare": false, 00:10:46.367 "compare_and_write": false, 00:10:46.367 "abort": true, 00:10:46.367 "seek_hole": false, 00:10:46.367 "seek_data": false, 
00:10:46.367 "copy": true, 00:10:46.367 "nvme_iov_md": false 00:10:46.367 }, 00:10:46.367 "memory_domains": [ 00:10:46.367 { 00:10:46.367 "dma_device_id": "system", 00:10:46.367 "dma_device_type": 1 00:10:46.367 }, 00:10:46.367 { 00:10:46.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.367 "dma_device_type": 2 00:10:46.367 } 00:10:46.367 ], 00:10:46.367 "driver_specific": {} 00:10:46.367 } 00:10:46.367 ] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.367 BaseBdev4 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.367 
09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.367 [ 00:10:46.367 { 00:10:46.367 "name": "BaseBdev4", 00:10:46.367 "aliases": [ 00:10:46.367 "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a" 00:10:46.367 ], 00:10:46.367 "product_name": "Malloc disk", 00:10:46.367 "block_size": 512, 00:10:46.367 "num_blocks": 65536, 00:10:46.367 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:46.367 "assigned_rate_limits": { 00:10:46.367 "rw_ios_per_sec": 0, 00:10:46.367 "rw_mbytes_per_sec": 0, 00:10:46.367 "r_mbytes_per_sec": 0, 00:10:46.367 "w_mbytes_per_sec": 0 00:10:46.367 }, 00:10:46.367 "claimed": false, 00:10:46.367 "zoned": false, 00:10:46.367 "supported_io_types": { 00:10:46.367 "read": true, 00:10:46.367 "write": true, 00:10:46.367 "unmap": true, 00:10:46.367 "flush": true, 00:10:46.367 "reset": true, 00:10:46.367 "nvme_admin": false, 00:10:46.367 "nvme_io": false, 00:10:46.367 "nvme_io_md": false, 00:10:46.367 "write_zeroes": true, 00:10:46.367 "zcopy": true, 00:10:46.367 "get_zone_info": false, 00:10:46.367 "zone_management": false, 00:10:46.367 "zone_append": false, 00:10:46.367 "compare": false, 00:10:46.367 "compare_and_write": false, 00:10:46.367 "abort": true, 00:10:46.367 "seek_hole": false, 00:10:46.367 "seek_data": false, 00:10:46.367 
"copy": true, 00:10:46.367 "nvme_iov_md": false 00:10:46.367 }, 00:10:46.367 "memory_domains": [ 00:10:46.367 { 00:10:46.367 "dma_device_id": "system", 00:10:46.367 "dma_device_type": 1 00:10:46.367 }, 00:10:46.367 { 00:10:46.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.367 "dma_device_type": 2 00:10:46.367 } 00:10:46.367 ], 00:10:46.367 "driver_specific": {} 00:10:46.367 } 00:10:46.367 ] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.367 [2024-11-20 09:23:11.734629] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.367 [2024-11-20 09:23:11.734742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.367 [2024-11-20 09:23:11.734794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.367 [2024-11-20 09:23:11.737023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.367 [2024-11-20 09:23:11.737133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.367 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.367 "name": "Existed_Raid", 00:10:46.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.367 "strip_size_kb": 64, 00:10:46.367 "state": "configuring", 00:10:46.367 
"raid_level": "raid0", 00:10:46.367 "superblock": false, 00:10:46.367 "num_base_bdevs": 4, 00:10:46.367 "num_base_bdevs_discovered": 3, 00:10:46.367 "num_base_bdevs_operational": 4, 00:10:46.367 "base_bdevs_list": [ 00:10:46.367 { 00:10:46.367 "name": "BaseBdev1", 00:10:46.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.367 "is_configured": false, 00:10:46.367 "data_offset": 0, 00:10:46.367 "data_size": 0 00:10:46.367 }, 00:10:46.367 { 00:10:46.367 "name": "BaseBdev2", 00:10:46.367 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:46.367 "is_configured": true, 00:10:46.367 "data_offset": 0, 00:10:46.368 "data_size": 65536 00:10:46.368 }, 00:10:46.368 { 00:10:46.368 "name": "BaseBdev3", 00:10:46.368 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:46.368 "is_configured": true, 00:10:46.368 "data_offset": 0, 00:10:46.368 "data_size": 65536 00:10:46.368 }, 00:10:46.368 { 00:10:46.368 "name": "BaseBdev4", 00:10:46.368 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:46.368 "is_configured": true, 00:10:46.368 "data_offset": 0, 00:10:46.368 "data_size": 65536 00:10:46.368 } 00:10:46.368 ] 00:10:46.368 }' 00:10:46.368 09:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.368 09:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.937 [2024-11-20 09:23:12.241780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.937 "name": "Existed_Raid", 00:10:46.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.937 "strip_size_kb": 64, 00:10:46.937 "state": "configuring", 00:10:46.937 "raid_level": "raid0", 00:10:46.937 "superblock": false, 00:10:46.937 
"num_base_bdevs": 4, 00:10:46.937 "num_base_bdevs_discovered": 2, 00:10:46.937 "num_base_bdevs_operational": 4, 00:10:46.937 "base_bdevs_list": [ 00:10:46.937 { 00:10:46.937 "name": "BaseBdev1", 00:10:46.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.937 "is_configured": false, 00:10:46.937 "data_offset": 0, 00:10:46.937 "data_size": 0 00:10:46.937 }, 00:10:46.937 { 00:10:46.937 "name": null, 00:10:46.937 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:46.937 "is_configured": false, 00:10:46.937 "data_offset": 0, 00:10:46.937 "data_size": 65536 00:10:46.937 }, 00:10:46.937 { 00:10:46.937 "name": "BaseBdev3", 00:10:46.937 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:46.937 "is_configured": true, 00:10:46.937 "data_offset": 0, 00:10:46.937 "data_size": 65536 00:10:46.937 }, 00:10:46.937 { 00:10:46.937 "name": "BaseBdev4", 00:10:46.937 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:46.937 "is_configured": true, 00:10:46.937 "data_offset": 0, 00:10:46.937 "data_size": 65536 00:10:46.937 } 00:10:46.937 ] 00:10:46.937 }' 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.937 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.505 09:23:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.505 [2024-11-20 09:23:12.796128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.505 BaseBdev1 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.505 [ 00:10:47.505 { 00:10:47.505 "name": "BaseBdev1", 00:10:47.505 "aliases": [ 00:10:47.505 "6d66c31a-e05b-4c92-946d-03b697515db2" 00:10:47.505 ], 00:10:47.505 "product_name": "Malloc disk", 00:10:47.505 "block_size": 512, 00:10:47.505 "num_blocks": 65536, 00:10:47.505 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:47.505 "assigned_rate_limits": { 00:10:47.505 "rw_ios_per_sec": 0, 00:10:47.505 "rw_mbytes_per_sec": 0, 00:10:47.505 "r_mbytes_per_sec": 0, 00:10:47.505 "w_mbytes_per_sec": 0 00:10:47.505 }, 00:10:47.505 "claimed": true, 00:10:47.505 "claim_type": "exclusive_write", 00:10:47.505 "zoned": false, 00:10:47.505 "supported_io_types": { 00:10:47.505 "read": true, 00:10:47.505 "write": true, 00:10:47.505 "unmap": true, 00:10:47.505 "flush": true, 00:10:47.505 "reset": true, 00:10:47.505 "nvme_admin": false, 00:10:47.505 "nvme_io": false, 00:10:47.505 "nvme_io_md": false, 00:10:47.505 "write_zeroes": true, 00:10:47.505 "zcopy": true, 00:10:47.505 "get_zone_info": false, 00:10:47.505 "zone_management": false, 00:10:47.505 "zone_append": false, 00:10:47.505 "compare": false, 00:10:47.505 "compare_and_write": false, 00:10:47.505 "abort": true, 00:10:47.505 "seek_hole": false, 00:10:47.505 "seek_data": false, 00:10:47.505 "copy": true, 00:10:47.505 "nvme_iov_md": false 00:10:47.505 }, 00:10:47.505 "memory_domains": [ 00:10:47.505 { 00:10:47.505 "dma_device_id": "system", 00:10:47.505 "dma_device_type": 1 00:10:47.505 }, 00:10:47.505 { 00:10:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.505 "dma_device_type": 2 00:10:47.505 } 00:10:47.505 ], 00:10:47.505 "driver_specific": {} 00:10:47.505 } 00:10:47.505 ] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.505 "name": "Existed_Raid", 00:10:47.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.505 "strip_size_kb": 64, 00:10:47.505 "state": "configuring", 00:10:47.505 "raid_level": "raid0", 00:10:47.505 "superblock": false, 
00:10:47.505 "num_base_bdevs": 4, 00:10:47.505 "num_base_bdevs_discovered": 3, 00:10:47.505 "num_base_bdevs_operational": 4, 00:10:47.505 "base_bdevs_list": [ 00:10:47.505 { 00:10:47.505 "name": "BaseBdev1", 00:10:47.505 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:47.505 "is_configured": true, 00:10:47.505 "data_offset": 0, 00:10:47.505 "data_size": 65536 00:10:47.505 }, 00:10:47.505 { 00:10:47.505 "name": null, 00:10:47.505 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:47.505 "is_configured": false, 00:10:47.505 "data_offset": 0, 00:10:47.505 "data_size": 65536 00:10:47.505 }, 00:10:47.505 { 00:10:47.505 "name": "BaseBdev3", 00:10:47.505 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:47.505 "is_configured": true, 00:10:47.505 "data_offset": 0, 00:10:47.505 "data_size": 65536 00:10:47.505 }, 00:10:47.505 { 00:10:47.505 "name": "BaseBdev4", 00:10:47.505 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:47.505 "is_configured": true, 00:10:47.505 "data_offset": 0, 00:10:47.505 "data_size": 65536 00:10:47.505 } 00:10:47.505 ] 00:10:47.505 }' 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.505 09:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.068 09:23:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:48.068 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.069 [2024-11-20 09:23:13.351333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.069 09:23:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.069 "name": "Existed_Raid", 00:10:48.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.069 "strip_size_kb": 64, 00:10:48.069 "state": "configuring", 00:10:48.069 "raid_level": "raid0", 00:10:48.069 "superblock": false, 00:10:48.069 "num_base_bdevs": 4, 00:10:48.069 "num_base_bdevs_discovered": 2, 00:10:48.069 "num_base_bdevs_operational": 4, 00:10:48.069 "base_bdevs_list": [ 00:10:48.069 { 00:10:48.069 "name": "BaseBdev1", 00:10:48.069 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:48.069 "is_configured": true, 00:10:48.069 "data_offset": 0, 00:10:48.069 "data_size": 65536 00:10:48.069 }, 00:10:48.069 { 00:10:48.069 "name": null, 00:10:48.069 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:48.069 "is_configured": false, 00:10:48.069 "data_offset": 0, 00:10:48.069 "data_size": 65536 00:10:48.069 }, 00:10:48.069 { 00:10:48.069 "name": null, 00:10:48.069 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:48.069 "is_configured": false, 00:10:48.069 "data_offset": 0, 00:10:48.069 "data_size": 65536 00:10:48.069 }, 00:10:48.069 { 00:10:48.069 "name": "BaseBdev4", 00:10:48.069 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:48.069 "is_configured": true, 00:10:48.069 "data_offset": 0, 00:10:48.069 "data_size": 65536 00:10:48.069 } 00:10:48.069 ] 00:10:48.069 }' 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.069 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.637 [2024-11-20 09:23:13.870472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.637 "name": "Existed_Raid", 00:10:48.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.637 "strip_size_kb": 64, 00:10:48.637 "state": "configuring", 00:10:48.637 "raid_level": "raid0", 00:10:48.637 "superblock": false, 00:10:48.637 "num_base_bdevs": 4, 00:10:48.637 "num_base_bdevs_discovered": 3, 00:10:48.637 "num_base_bdevs_operational": 4, 00:10:48.637 "base_bdevs_list": [ 00:10:48.637 { 00:10:48.637 "name": "BaseBdev1", 00:10:48.637 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:48.637 "is_configured": true, 00:10:48.637 "data_offset": 0, 00:10:48.637 "data_size": 65536 00:10:48.637 }, 00:10:48.637 { 00:10:48.637 "name": null, 00:10:48.637 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:48.637 "is_configured": false, 00:10:48.637 "data_offset": 0, 00:10:48.637 "data_size": 65536 00:10:48.637 }, 00:10:48.637 { 00:10:48.637 "name": "BaseBdev3", 00:10:48.637 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 
00:10:48.637 "is_configured": true, 00:10:48.637 "data_offset": 0, 00:10:48.637 "data_size": 65536 00:10:48.637 }, 00:10:48.637 { 00:10:48.637 "name": "BaseBdev4", 00:10:48.637 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:48.637 "is_configured": true, 00:10:48.637 "data_offset": 0, 00:10:48.637 "data_size": 65536 00:10:48.637 } 00:10:48.637 ] 00:10:48.637 }' 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.637 09:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.897 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.897 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.897 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.897 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.897 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.155 [2024-11-20 09:23:14.365637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.155 09:23:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.155 "name": "Existed_Raid", 00:10:49.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.155 "strip_size_kb": 64, 00:10:49.155 "state": "configuring", 00:10:49.155 "raid_level": "raid0", 00:10:49.155 "superblock": false, 00:10:49.155 "num_base_bdevs": 4, 00:10:49.155 "num_base_bdevs_discovered": 2, 00:10:49.155 
"num_base_bdevs_operational": 4, 00:10:49.155 "base_bdevs_list": [ 00:10:49.155 { 00:10:49.155 "name": null, 00:10:49.155 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:49.155 "is_configured": false, 00:10:49.155 "data_offset": 0, 00:10:49.155 "data_size": 65536 00:10:49.155 }, 00:10:49.155 { 00:10:49.155 "name": null, 00:10:49.155 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:49.155 "is_configured": false, 00:10:49.155 "data_offset": 0, 00:10:49.155 "data_size": 65536 00:10:49.155 }, 00:10:49.155 { 00:10:49.155 "name": "BaseBdev3", 00:10:49.155 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:49.155 "is_configured": true, 00:10:49.155 "data_offset": 0, 00:10:49.155 "data_size": 65536 00:10:49.155 }, 00:10:49.155 { 00:10:49.155 "name": "BaseBdev4", 00:10:49.155 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:49.155 "is_configured": true, 00:10:49.155 "data_offset": 0, 00:10:49.155 "data_size": 65536 00:10:49.155 } 00:10:49.155 ] 00:10:49.155 }' 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.155 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.722 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.722 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.722 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.723 [2024-11-20 09:23:14.996409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.723 09:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.723 
09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.723 "name": "Existed_Raid", 00:10:49.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.723 "strip_size_kb": 64, 00:10:49.723 "state": "configuring", 00:10:49.723 "raid_level": "raid0", 00:10:49.723 "superblock": false, 00:10:49.723 "num_base_bdevs": 4, 00:10:49.723 "num_base_bdevs_discovered": 3, 00:10:49.723 "num_base_bdevs_operational": 4, 00:10:49.723 "base_bdevs_list": [ 00:10:49.723 { 00:10:49.723 "name": null, 00:10:49.723 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:49.723 "is_configured": false, 00:10:49.723 "data_offset": 0, 00:10:49.723 "data_size": 65536 00:10:49.723 }, 00:10:49.723 { 00:10:49.723 "name": "BaseBdev2", 00:10:49.723 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:49.723 "is_configured": true, 00:10:49.723 "data_offset": 0, 00:10:49.723 "data_size": 65536 00:10:49.723 }, 00:10:49.723 { 00:10:49.723 "name": "BaseBdev3", 00:10:49.723 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:49.723 "is_configured": true, 00:10:49.723 "data_offset": 0, 00:10:49.723 "data_size": 65536 00:10:49.723 }, 00:10:49.723 { 00:10:49.723 "name": "BaseBdev4", 00:10:49.723 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:49.723 "is_configured": true, 00:10:49.723 "data_offset": 0, 00:10:49.723 "data_size": 65536 00:10:49.723 } 00:10:49.723 ] 00:10:49.723 }' 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.723 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.293 09:23:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6d66c31a-e05b-4c92-946d-03b697515db2 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 [2024-11-20 09:23:15.600044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.293 [2024-11-20 09:23:15.600117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.293 [2024-11-20 09:23:15.600126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:50.293 [2024-11-20 09:23:15.600485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:50.293 [2024-11-20 09:23:15.600680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.293 [2024-11-20 09:23:15.600697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:50.293 [2024-11-20 09:23:15.601007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.293 NewBaseBdev 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:50.293 [ 00:10:50.293 { 00:10:50.293 "name": "NewBaseBdev", 00:10:50.293 "aliases": [ 00:10:50.293 "6d66c31a-e05b-4c92-946d-03b697515db2" 00:10:50.293 ], 00:10:50.293 "product_name": "Malloc disk", 00:10:50.293 "block_size": 512, 00:10:50.293 "num_blocks": 65536, 00:10:50.293 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:50.293 "assigned_rate_limits": { 00:10:50.293 "rw_ios_per_sec": 0, 00:10:50.293 "rw_mbytes_per_sec": 0, 00:10:50.293 "r_mbytes_per_sec": 0, 00:10:50.293 "w_mbytes_per_sec": 0 00:10:50.293 }, 00:10:50.293 "claimed": true, 00:10:50.293 "claim_type": "exclusive_write", 00:10:50.293 "zoned": false, 00:10:50.293 "supported_io_types": { 00:10:50.293 "read": true, 00:10:50.293 "write": true, 00:10:50.293 "unmap": true, 00:10:50.293 "flush": true, 00:10:50.293 "reset": true, 00:10:50.293 "nvme_admin": false, 00:10:50.293 "nvme_io": false, 00:10:50.293 "nvme_io_md": false, 00:10:50.293 "write_zeroes": true, 00:10:50.293 "zcopy": true, 00:10:50.293 "get_zone_info": false, 00:10:50.293 "zone_management": false, 00:10:50.293 "zone_append": false, 00:10:50.293 "compare": false, 00:10:50.293 "compare_and_write": false, 00:10:50.293 "abort": true, 00:10:50.293 "seek_hole": false, 00:10:50.293 "seek_data": false, 00:10:50.293 "copy": true, 00:10:50.293 "nvme_iov_md": false 00:10:50.293 }, 00:10:50.293 "memory_domains": [ 00:10:50.293 { 00:10:50.293 "dma_device_id": "system", 00:10:50.293 "dma_device_type": 1 00:10:50.293 }, 00:10:50.293 { 00:10:50.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.293 "dma_device_type": 2 00:10:50.293 } 00:10:50.293 ], 00:10:50.293 "driver_specific": {} 00:10:50.293 } 00:10:50.293 ] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.293 "name": "Existed_Raid", 00:10:50.293 "uuid": "c4e3e183-069f-44e4-acd6-11368f0eac50", 00:10:50.293 "strip_size_kb": 64, 00:10:50.293 "state": "online", 00:10:50.293 "raid_level": "raid0", 00:10:50.293 "superblock": false, 00:10:50.293 "num_base_bdevs": 4, 00:10:50.293 
"num_base_bdevs_discovered": 4, 00:10:50.293 "num_base_bdevs_operational": 4, 00:10:50.293 "base_bdevs_list": [ 00:10:50.293 { 00:10:50.293 "name": "NewBaseBdev", 00:10:50.293 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:50.293 "is_configured": true, 00:10:50.293 "data_offset": 0, 00:10:50.293 "data_size": 65536 00:10:50.293 }, 00:10:50.293 { 00:10:50.293 "name": "BaseBdev2", 00:10:50.293 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:50.293 "is_configured": true, 00:10:50.293 "data_offset": 0, 00:10:50.293 "data_size": 65536 00:10:50.293 }, 00:10:50.293 { 00:10:50.293 "name": "BaseBdev3", 00:10:50.293 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:50.293 "is_configured": true, 00:10:50.293 "data_offset": 0, 00:10:50.293 "data_size": 65536 00:10:50.293 }, 00:10:50.293 { 00:10:50.293 "name": "BaseBdev4", 00:10:50.293 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:50.293 "is_configured": true, 00:10:50.293 "data_offset": 0, 00:10:50.293 "data_size": 65536 00:10:50.293 } 00:10:50.293 ] 00:10:50.293 }' 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.293 09:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.863 [2024-11-20 09:23:16.087903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.863 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.863 "name": "Existed_Raid", 00:10:50.863 "aliases": [ 00:10:50.863 "c4e3e183-069f-44e4-acd6-11368f0eac50" 00:10:50.863 ], 00:10:50.863 "product_name": "Raid Volume", 00:10:50.863 "block_size": 512, 00:10:50.863 "num_blocks": 262144, 00:10:50.863 "uuid": "c4e3e183-069f-44e4-acd6-11368f0eac50", 00:10:50.863 "assigned_rate_limits": { 00:10:50.863 "rw_ios_per_sec": 0, 00:10:50.863 "rw_mbytes_per_sec": 0, 00:10:50.863 "r_mbytes_per_sec": 0, 00:10:50.863 "w_mbytes_per_sec": 0 00:10:50.863 }, 00:10:50.863 "claimed": false, 00:10:50.863 "zoned": false, 00:10:50.863 "supported_io_types": { 00:10:50.863 "read": true, 00:10:50.863 "write": true, 00:10:50.863 "unmap": true, 00:10:50.863 "flush": true, 00:10:50.863 "reset": true, 00:10:50.863 "nvme_admin": false, 00:10:50.863 "nvme_io": false, 00:10:50.863 "nvme_io_md": false, 00:10:50.863 "write_zeroes": true, 00:10:50.863 "zcopy": false, 00:10:50.863 "get_zone_info": false, 00:10:50.863 "zone_management": false, 00:10:50.863 "zone_append": false, 00:10:50.863 "compare": false, 00:10:50.863 "compare_and_write": false, 00:10:50.863 "abort": false, 00:10:50.863 "seek_hole": false, 00:10:50.863 "seek_data": false, 00:10:50.863 "copy": false, 00:10:50.863 "nvme_iov_md": false 00:10:50.863 }, 00:10:50.863 "memory_domains": [ 
00:10:50.863 { 00:10:50.863 "dma_device_id": "system", 00:10:50.863 "dma_device_type": 1 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.863 "dma_device_type": 2 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "system", 00:10:50.863 "dma_device_type": 1 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.863 "dma_device_type": 2 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "system", 00:10:50.863 "dma_device_type": 1 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.863 "dma_device_type": 2 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "system", 00:10:50.863 "dma_device_type": 1 00:10:50.863 }, 00:10:50.863 { 00:10:50.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.863 "dma_device_type": 2 00:10:50.863 } 00:10:50.863 ], 00:10:50.863 "driver_specific": { 00:10:50.863 "raid": { 00:10:50.863 "uuid": "c4e3e183-069f-44e4-acd6-11368f0eac50", 00:10:50.863 "strip_size_kb": 64, 00:10:50.863 "state": "online", 00:10:50.863 "raid_level": "raid0", 00:10:50.863 "superblock": false, 00:10:50.863 "num_base_bdevs": 4, 00:10:50.863 "num_base_bdevs_discovered": 4, 00:10:50.863 "num_base_bdevs_operational": 4, 00:10:50.863 "base_bdevs_list": [ 00:10:50.863 { 00:10:50.863 "name": "NewBaseBdev", 00:10:50.863 "uuid": "6d66c31a-e05b-4c92-946d-03b697515db2", 00:10:50.863 "is_configured": true, 00:10:50.863 "data_offset": 0, 00:10:50.863 "data_size": 65536 00:10:50.863 }, 00:10:50.863 { 00:10:50.864 "name": "BaseBdev2", 00:10:50.864 "uuid": "d9c3ea61-7aef-4497-8084-5c2cfa9cb29c", 00:10:50.864 "is_configured": true, 00:10:50.864 "data_offset": 0, 00:10:50.864 "data_size": 65536 00:10:50.864 }, 00:10:50.864 { 00:10:50.864 "name": "BaseBdev3", 00:10:50.864 "uuid": "ca3013af-fd3e-424b-838f-ce17c56fe889", 00:10:50.864 "is_configured": true, 00:10:50.864 "data_offset": 0, 00:10:50.864 "data_size": 65536 
00:10:50.864 }, 00:10:50.864 { 00:10:50.864 "name": "BaseBdev4", 00:10:50.864 "uuid": "105fbb19-4e7d-4d6c-8ebc-2afcd351c80a", 00:10:50.864 "is_configured": true, 00:10:50.864 "data_offset": 0, 00:10:50.864 "data_size": 65536 00:10:50.864 } 00:10:50.864 ] 00:10:50.864 } 00:10:50.864 } 00:10:50.864 }' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:50.864 BaseBdev2 00:10:50.864 BaseBdev3 00:10:50.864 BaseBdev4' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.864 
09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.864 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.123 [2024-11-20 09:23:16.394931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.123 [2024-11-20 09:23:16.394973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.123 [2024-11-20 09:23:16.395072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.123 [2024-11-20 09:23:16.395155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.123 [2024-11-20 09:23:16.395167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:51.123 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69699 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69699 ']' 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69699 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69699 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69699' 00:10:51.124 killing process with pid 69699 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69699 00:10:51.124 [2024-11-20 09:23:16.438340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.124 09:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69699 00:10:51.690 [2024-11-20 09:23:16.911247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.069 00:10:53.069 real 0m12.469s 00:10:53.069 user 0m19.725s 00:10:53.069 sys 0m2.153s 00:10:53.069 ************************************ 00:10:53.069 END TEST raid_state_function_test 00:10:53.069 ************************************ 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.069 09:23:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:53.069 09:23:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.069 09:23:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.069 09:23:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.069 ************************************ 00:10:53.069 START TEST raid_state_function_test_sb 00:10:53.069 ************************************ 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.069 
09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70383 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70383' 00:10:53.069 Process raid pid: 70383 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70383 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70383 ']' 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.069 09:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.069 [2024-11-20 09:23:18.402818] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:10:53.069 [2024-11-20 09:23:18.403073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:53.329 [2024-11-20 09:23:18.580589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:53.329 [2024-11-20 09:23:18.714555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:53.588 [2024-11-20 09:23:18.959090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:53.588 [2024-11-20 09:23:18.959147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.157 [2024-11-20 09:23:19.327584] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:54.157 [2024-11-20 09:23:19.327657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:54.157 [2024-11-20 09:23:19.327670] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:54.157 [2024-11-20 09:23:19.327681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:54.157 [2024-11-20 09:23:19.327688] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:54.157 [2024-11-20 09:23:19.327707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:54.157 [2024-11-20 09:23:19.327715] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:54.157 [2024-11-20 09:23:19.327724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:54.157 "name": "Existed_Raid",
00:10:54.157 "uuid": "fc915f32-63c8-4707-ab88-75ac92ab4ba7",
00:10:54.157 "strip_size_kb": 64,
00:10:54.157 "state": "configuring",
00:10:54.157 "raid_level": "raid0",
00:10:54.157 "superblock": true,
00:10:54.157 "num_base_bdevs": 4,
00:10:54.157 "num_base_bdevs_discovered": 0,
00:10:54.157 "num_base_bdevs_operational": 4,
00:10:54.157 "base_bdevs_list": [
00:10:54.157 {
00:10:54.157 "name": "BaseBdev1",
00:10:54.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.157 "is_configured": false,
00:10:54.157 "data_offset": 0,
00:10:54.157 "data_size": 0
00:10:54.157 },
00:10:54.157 {
00:10:54.157 "name": "BaseBdev2",
00:10:54.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.157 "is_configured": false,
00:10:54.157 "data_offset": 0,
00:10:54.157 "data_size": 0
00:10:54.157 },
00:10:54.157 {
00:10:54.157 "name": "BaseBdev3",
00:10:54.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.157 "is_configured": false,
00:10:54.157 "data_offset": 0,
00:10:54.157 "data_size": 0
00:10:54.157 },
00:10:54.157 {
00:10:54.157 "name": "BaseBdev4",
00:10:54.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.157 "is_configured": false,
00:10:54.157 "data_offset": 0,
00:10:54.157 "data_size": 0
00:10:54.157 }
00:10:54.157 ]
00:10:54.157 }'
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:54.157 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.417 [2024-11-20 09:23:19.794687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:54.417 [2024-11-20 09:23:19.794732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.417 [2024-11-20 09:23:19.806685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:54.417 [2024-11-20 09:23:19.806733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:54.417 [2024-11-20 09:23:19.806745] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:54.417 [2024-11-20 09:23:19.806755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:54.417 [2024-11-20 09:23:19.806762] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:54.417 [2024-11-20 09:23:19.806772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:54.417 [2024-11-20 09:23:19.806779] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:54.417 [2024-11-20 09:23:19.806789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.417 [2024-11-20 09:23:19.859044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:54.417 BaseBdev1
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.417 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.677 [
00:10:54.677 {
00:10:54.677 "name": "BaseBdev1",
00:10:54.677 "aliases": [
00:10:54.677 "8d77f036-cbf9-4694-ad19-98670c8388e7"
00:10:54.677 ],
00:10:54.677 "product_name": "Malloc disk",
00:10:54.677 "block_size": 512,
00:10:54.677 "num_blocks": 65536,
00:10:54.677 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7",
00:10:54.677 "assigned_rate_limits": {
00:10:54.677 "rw_ios_per_sec": 0,
00:10:54.677 "rw_mbytes_per_sec": 0,
00:10:54.677 "r_mbytes_per_sec": 0,
00:10:54.677 "w_mbytes_per_sec": 0
00:10:54.677 },
00:10:54.677 "claimed": true,
00:10:54.677 "claim_type": "exclusive_write",
00:10:54.677 "zoned": false,
00:10:54.677 "supported_io_types": {
00:10:54.677 "read": true,
00:10:54.677 "write": true,
00:10:54.677 "unmap": true,
00:10:54.677 "flush": true,
00:10:54.677 "reset": true,
00:10:54.677 "nvme_admin": false,
00:10:54.677 "nvme_io": false,
00:10:54.677 "nvme_io_md": false,
00:10:54.677 "write_zeroes": true,
00:10:54.677 "zcopy": true,
00:10:54.677 "get_zone_info": false,
00:10:54.677 "zone_management": false,
00:10:54.677 "zone_append": false,
00:10:54.677 "compare": false,
00:10:54.677 "compare_and_write": false,
00:10:54.677 "abort": true,
00:10:54.677 "seek_hole": false,
00:10:54.677 "seek_data": false,
00:10:54.677 "copy": true,
00:10:54.677 "nvme_iov_md": false
00:10:54.677 },
00:10:54.677 "memory_domains": [
00:10:54.677 {
00:10:54.677 "dma_device_id": "system",
00:10:54.677 "dma_device_type": 1
00:10:54.677 },
00:10:54.677 {
00:10:54.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:54.677 "dma_device_type": 2
00:10:54.677 }
00:10:54.677 ],
00:10:54.677 "driver_specific": {}
00:10:54.677 }
00:10:54.677 ]
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.677 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:54.677 "name": "Existed_Raid",
00:10:54.677 "uuid": "1aa69b75-b6ec-4e2a-9249-10ad9374404a",
00:10:54.677 "strip_size_kb": 64,
00:10:54.677 "state": "configuring",
00:10:54.677 "raid_level": "raid0",
00:10:54.677 "superblock": true,
00:10:54.677 "num_base_bdevs": 4,
00:10:54.677 "num_base_bdevs_discovered": 1,
00:10:54.677 "num_base_bdevs_operational": 4,
00:10:54.677 "base_bdevs_list": [
00:10:54.677 {
00:10:54.677 "name": "BaseBdev1",
00:10:54.678 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7",
00:10:54.678 "is_configured": true,
00:10:54.678 "data_offset": 2048,
00:10:54.678 "data_size": 63488
00:10:54.678 },
00:10:54.678 {
00:10:54.678 "name": "BaseBdev2",
00:10:54.678 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.678 "is_configured": false,
00:10:54.678 "data_offset": 0,
00:10:54.678 "data_size": 0
00:10:54.678 },
00:10:54.678 {
00:10:54.678 "name": "BaseBdev3",
00:10:54.678 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.678 "is_configured": false,
00:10:54.678 "data_offset": 0,
00:10:54.678 "data_size": 0
00:10:54.678 },
00:10:54.678 {
00:10:54.678 "name": "BaseBdev4",
00:10:54.678 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:54.678 "is_configured": false,
00:10:54.678 "data_offset": 0,
00:10:54.678 "data_size": 0
00:10:54.678 }
00:10:54.678 ]
00:10:54.678 }'
00:10:54.678 09:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:54.678 09:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.949 [2024-11-20 09:23:20.366306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:54.949 [2024-11-20 09:23:20.366439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.949 [2024-11-20 09:23:20.378348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:54.949 [2024-11-20 09:23:20.380513] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:54.949 [2024-11-20 09:23:20.380609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:54.949 [2024-11-20 09:23:20.380626] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:54.949 [2024-11-20 09:23:20.380639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:54.949 [2024-11-20 09:23:20.380648] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:54.949 [2024-11-20 09:23:20.380659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:54.949 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:55.227 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.227 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:55.227 "name": "Existed_Raid",
00:10:55.227 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c",
00:10:55.227 "strip_size_kb": 64,
00:10:55.227 "state": "configuring",
00:10:55.227 "raid_level": "raid0",
00:10:55.227 "superblock": true,
00:10:55.227 "num_base_bdevs": 4,
00:10:55.227 "num_base_bdevs_discovered": 1,
00:10:55.227 "num_base_bdevs_operational": 4,
00:10:55.227 "base_bdevs_list": [
00:10:55.227 {
00:10:55.227 "name": "BaseBdev1",
00:10:55.227 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7",
00:10:55.227 "is_configured": true,
00:10:55.227 "data_offset": 2048,
00:10:55.227 "data_size": 63488
00:10:55.227 },
00:10:55.227 {
00:10:55.227 "name": "BaseBdev2",
00:10:55.227 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.227 "is_configured": false,
00:10:55.227 "data_offset": 0,
00:10:55.227 "data_size": 0
00:10:55.227 },
00:10:55.227 {
00:10:55.227 "name": "BaseBdev3",
00:10:55.227 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.227 "is_configured": false,
00:10:55.227 "data_offset": 0,
00:10:55.227 "data_size": 0
00:10:55.227 },
00:10:55.227 {
00:10:55.227 "name": "BaseBdev4",
00:10:55.227 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.227 "is_configured": false,
00:10:55.227 "data_offset": 0,
00:10:55.227 "data_size": 0
00:10:55.227 }
00:10:55.227 ]
00:10:55.227 }'
00:10:55.227 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:55.227 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.487 [2024-11-20 09:23:20.867524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:55.487 BaseBdev2
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.487 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.487 [
00:10:55.487 {
00:10:55.487 "name": "BaseBdev2",
00:10:55.487 "aliases": [
00:10:55.487 "a9fff84d-882a-4751-9ec7-e26f36bb2a8e"
00:10:55.487 ],
00:10:55.487 "product_name": "Malloc disk",
00:10:55.487 "block_size": 512,
00:10:55.487 "num_blocks": 65536,
00:10:55.487 "uuid": "a9fff84d-882a-4751-9ec7-e26f36bb2a8e",
00:10:55.487 "assigned_rate_limits": {
00:10:55.487 "rw_ios_per_sec": 0,
00:10:55.487 "rw_mbytes_per_sec": 0,
00:10:55.487 "r_mbytes_per_sec": 0,
00:10:55.487 "w_mbytes_per_sec": 0
00:10:55.487 },
00:10:55.487 "claimed": true,
00:10:55.487 "claim_type": "exclusive_write",
00:10:55.487 "zoned": false,
00:10:55.487 "supported_io_types": {
00:10:55.487 "read": true,
00:10:55.487 "write": true,
00:10:55.487 "unmap": true,
00:10:55.487 "flush": true,
00:10:55.487 "reset": true,
00:10:55.487 "nvme_admin": false,
00:10:55.488 "nvme_io": false,
00:10:55.488 "nvme_io_md": false,
00:10:55.488 "write_zeroes": true,
00:10:55.488 "zcopy": true,
00:10:55.488 "get_zone_info": false,
00:10:55.488 "zone_management": false,
00:10:55.488 "zone_append": false,
00:10:55.488 "compare": false,
00:10:55.488 "compare_and_write": false,
00:10:55.488 "abort": true,
00:10:55.488 "seek_hole": false,
00:10:55.488 "seek_data": false,
00:10:55.488 "copy": true,
00:10:55.488 "nvme_iov_md": false
00:10:55.488 },
00:10:55.488 "memory_domains": [
00:10:55.488 {
00:10:55.488 "dma_device_id": "system",
00:10:55.488 "dma_device_type": 1
00:10:55.488 },
00:10:55.488 {
00:10:55.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:55.488 "dma_device_type": 2
00:10:55.488 }
00:10:55.488 ],
00:10:55.488 "driver_specific": {}
00:10:55.488 }
00:10:55.488 ]
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:55.488 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.747 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:55.747 "name": "Existed_Raid",
00:10:55.747 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c",
00:10:55.747 "strip_size_kb": 64,
00:10:55.747 "state": "configuring",
00:10:55.747 "raid_level": "raid0",
00:10:55.747 "superblock": true,
00:10:55.747 "num_base_bdevs": 4,
00:10:55.747 "num_base_bdevs_discovered": 2,
00:10:55.747 "num_base_bdevs_operational": 4,
00:10:55.747 "base_bdevs_list": [
00:10:55.747 {
00:10:55.747 "name": "BaseBdev1",
00:10:55.747 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7",
00:10:55.747 "is_configured": true,
00:10:55.747 "data_offset": 2048,
00:10:55.747 "data_size": 63488
00:10:55.747 },
00:10:55.747 {
00:10:55.747 "name": "BaseBdev2",
00:10:55.747 "uuid": "a9fff84d-882a-4751-9ec7-e26f36bb2a8e",
00:10:55.747 "is_configured": true,
00:10:55.747 "data_offset": 2048,
00:10:55.747 "data_size": 63488
00:10:55.747 },
00:10:55.747 {
00:10:55.747 "name": "BaseBdev3",
00:10:55.747 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.747 "is_configured": false,
00:10:55.747 "data_offset": 0,
00:10:55.747 "data_size": 0
00:10:55.747 },
00:10:55.747 {
00:10:55.747 "name": "BaseBdev4",
00:10:55.747 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.747 "is_configured": false,
00:10:55.747 "data_offset": 0,
00:10:55.747 "data_size": 0
00:10:55.747 }
00:10:55.747 ]
00:10:55.747 }'
00:10:55.747 09:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:55.747 09:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.011 [2024-11-20 09:23:21.376483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:56.011 BaseBdev3
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.011 [
00:10:56.011 {
00:10:56.011 "name": "BaseBdev3",
00:10:56.011 "aliases": [
00:10:56.011 "664c5b57-326c-4cdf-b9ea-bea22e611721"
00:10:56.011 ],
00:10:56.011 "product_name": "Malloc disk",
00:10:56.011 "block_size": 512,
00:10:56.011 "num_blocks": 65536,
00:10:56.011 "uuid": "664c5b57-326c-4cdf-b9ea-bea22e611721",
00:10:56.011 "assigned_rate_limits": {
00:10:56.011 "rw_ios_per_sec": 0,
00:10:56.011 "rw_mbytes_per_sec": 0,
00:10:56.011 "r_mbytes_per_sec": 0,
00:10:56.011 "w_mbytes_per_sec": 0
00:10:56.011 },
00:10:56.011 "claimed": true,
00:10:56.011 "claim_type": "exclusive_write",
00:10:56.011 "zoned": false,
00:10:56.011 "supported_io_types": {
00:10:56.011 "read": true,
00:10:56.011 "write": true,
00:10:56.011 "unmap": true,
00:10:56.011 "flush": true,
00:10:56.011 "reset": true,
00:10:56.011 "nvme_admin": false,
00:10:56.011 "nvme_io": false,
00:10:56.011 "nvme_io_md": false,
00:10:56.011 "write_zeroes": true,
00:10:56.011 "zcopy": true,
00:10:56.011 "get_zone_info": false,
00:10:56.011 "zone_management": false,
00:10:56.011 "zone_append": false,
00:10:56.011 "compare": false,
00:10:56.011 "compare_and_write": false,
00:10:56.011 "abort": true,
00:10:56.011 "seek_hole": false,
00:10:56.011 "seek_data": false,
00:10:56.011 "copy": true,
00:10:56.011 "nvme_iov_md": false
00:10:56.011 },
00:10:56.011 "memory_domains": [
00:10:56.011 {
00:10:56.011 "dma_device_id": "system",
00:10:56.011 "dma_device_type": 1
00:10:56.011 },
00:10:56.011 {
00:10:56.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:56.011 "dma_device_type": 2
00:10:56.011 }
00:10:56.011 ],
00:10:56.011 "driver_specific": {}
00:10:56.011 }
00:10:56.011 ]
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:56.011 "name": "Existed_Raid",
00:10:56.011 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c",
00:10:56.011 "strip_size_kb": 64,
00:10:56.011 "state": "configuring",
00:10:56.011 "raid_level": "raid0",
00:10:56.011 "superblock": true,
00:10:56.011 "num_base_bdevs": 4,
00:10:56.011 "num_base_bdevs_discovered": 3,
00:10:56.011 "num_base_bdevs_operational": 4,
00:10:56.011 "base_bdevs_list": [
00:10:56.011 {
00:10:56.011 "name": "BaseBdev1",
00:10:56.011 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7",
00:10:56.011 "is_configured": true,
00:10:56.011 "data_offset": 2048,
00:10:56.011 "data_size": 63488
00:10:56.011 },
00:10:56.011 {
00:10:56.011 "name": "BaseBdev2",
00:10:56.011 "uuid": "a9fff84d-882a-4751-9ec7-e26f36bb2a8e",
00:10:56.011 "is_configured": true,
00:10:56.011 "data_offset": 2048,
00:10:56.011 "data_size": 63488
00:10:56.011 },
00:10:56.011 {
00:10:56.011 "name": "BaseBdev3",
00:10:56.011 "uuid": "664c5b57-326c-4cdf-b9ea-bea22e611721",
00:10:56.011 "is_configured": true,
00:10:56.011 "data_offset": 2048,
00:10:56.011 "data_size": 63488
00:10:56.011 },
00:10:56.011 {
00:10:56.011 "name": "BaseBdev4",
00:10:56.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:56.011 "is_configured": false,
00:10:56.011 "data_offset": 0,
00:10:56.011 "data_size": 0
00:10:56.011 }
00:10:56.011 ]
00:10:56.011 }'
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:56.011 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.580 [2024-11-20 09:23:21.913967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:56.580 [2024-11-20 09:23:21.914287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:56.580 [2024-11-20 09:23:21.914304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:56.580 [2024-11-20 09:23:21.914657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:56.580 [2024-11-20 09:23:21.914842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:56.580 [2024-11-20 09:23:21.914858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:56.580 BaseBdev4
00:10:56.580 [2024-11-20 09:23:21.915033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.580 [
00:10:56.580 {
00:10:56.580 "name": "BaseBdev4",
00:10:56.580 "aliases": [
00:10:56.580 "fe36ba65-493d-41ad-9eed-bf7241647c23"
00:10:56.580 ],
00:10:56.580 "product_name": "Malloc disk",
00:10:56.580 "block_size": 512,
"num_blocks": 65536, 00:10:56.580 "uuid": "fe36ba65-493d-41ad-9eed-bf7241647c23", 00:10:56.580 "assigned_rate_limits": { 00:10:56.580 "rw_ios_per_sec": 0, 00:10:56.580 "rw_mbytes_per_sec": 0, 00:10:56.580 "r_mbytes_per_sec": 0, 00:10:56.580 "w_mbytes_per_sec": 0 00:10:56.580 }, 00:10:56.580 "claimed": true, 00:10:56.580 "claim_type": "exclusive_write", 00:10:56.580 "zoned": false, 00:10:56.580 "supported_io_types": { 00:10:56.580 "read": true, 00:10:56.580 "write": true, 00:10:56.580 "unmap": true, 00:10:56.580 "flush": true, 00:10:56.580 "reset": true, 00:10:56.580 "nvme_admin": false, 00:10:56.580 "nvme_io": false, 00:10:56.580 "nvme_io_md": false, 00:10:56.580 "write_zeroes": true, 00:10:56.580 "zcopy": true, 00:10:56.580 "get_zone_info": false, 00:10:56.580 "zone_management": false, 00:10:56.580 "zone_append": false, 00:10:56.580 "compare": false, 00:10:56.580 "compare_and_write": false, 00:10:56.580 "abort": true, 00:10:56.580 "seek_hole": false, 00:10:56.580 "seek_data": false, 00:10:56.580 "copy": true, 00:10:56.580 "nvme_iov_md": false 00:10:56.580 }, 00:10:56.580 "memory_domains": [ 00:10:56.580 { 00:10:56.580 "dma_device_id": "system", 00:10:56.580 "dma_device_type": 1 00:10:56.580 }, 00:10:56.580 { 00:10:56.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.580 "dma_device_type": 2 00:10:56.580 } 00:10:56.580 ], 00:10:56.580 "driver_specific": {} 00:10:56.580 } 00:10:56.580 ] 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.580 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.581 09:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.581 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.581 "name": "Existed_Raid", 00:10:56.581 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c", 00:10:56.581 "strip_size_kb": 64, 00:10:56.581 "state": "online", 00:10:56.581 "raid_level": "raid0", 00:10:56.581 "superblock": true, 00:10:56.581 "num_base_bdevs": 4, 
00:10:56.581 "num_base_bdevs_discovered": 4, 00:10:56.581 "num_base_bdevs_operational": 4, 00:10:56.581 "base_bdevs_list": [ 00:10:56.581 { 00:10:56.581 "name": "BaseBdev1", 00:10:56.581 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7", 00:10:56.581 "is_configured": true, 00:10:56.581 "data_offset": 2048, 00:10:56.581 "data_size": 63488 00:10:56.581 }, 00:10:56.581 { 00:10:56.581 "name": "BaseBdev2", 00:10:56.581 "uuid": "a9fff84d-882a-4751-9ec7-e26f36bb2a8e", 00:10:56.581 "is_configured": true, 00:10:56.581 "data_offset": 2048, 00:10:56.581 "data_size": 63488 00:10:56.581 }, 00:10:56.581 { 00:10:56.581 "name": "BaseBdev3", 00:10:56.581 "uuid": "664c5b57-326c-4cdf-b9ea-bea22e611721", 00:10:56.581 "is_configured": true, 00:10:56.581 "data_offset": 2048, 00:10:56.581 "data_size": 63488 00:10:56.581 }, 00:10:56.581 { 00:10:56.581 "name": "BaseBdev4", 00:10:56.581 "uuid": "fe36ba65-493d-41ad-9eed-bf7241647c23", 00:10:56.581 "is_configured": true, 00:10:56.581 "data_offset": 2048, 00:10:56.581 "data_size": 63488 00:10:56.581 } 00:10:56.581 ] 00:10:56.581 }' 00:10:56.581 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.581 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.151 
09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.151 [2024-11-20 09:23:22.417669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.151 "name": "Existed_Raid", 00:10:57.151 "aliases": [ 00:10:57.151 "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c" 00:10:57.151 ], 00:10:57.151 "product_name": "Raid Volume", 00:10:57.151 "block_size": 512, 00:10:57.151 "num_blocks": 253952, 00:10:57.151 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c", 00:10:57.151 "assigned_rate_limits": { 00:10:57.151 "rw_ios_per_sec": 0, 00:10:57.151 "rw_mbytes_per_sec": 0, 00:10:57.151 "r_mbytes_per_sec": 0, 00:10:57.151 "w_mbytes_per_sec": 0 00:10:57.151 }, 00:10:57.151 "claimed": false, 00:10:57.151 "zoned": false, 00:10:57.151 "supported_io_types": { 00:10:57.151 "read": true, 00:10:57.151 "write": true, 00:10:57.151 "unmap": true, 00:10:57.151 "flush": true, 00:10:57.151 "reset": true, 00:10:57.151 "nvme_admin": false, 00:10:57.151 "nvme_io": false, 00:10:57.151 "nvme_io_md": false, 00:10:57.151 "write_zeroes": true, 00:10:57.151 "zcopy": false, 00:10:57.151 "get_zone_info": false, 00:10:57.151 "zone_management": false, 00:10:57.151 "zone_append": false, 00:10:57.151 "compare": false, 00:10:57.151 "compare_and_write": false, 00:10:57.151 "abort": false, 00:10:57.151 "seek_hole": false, 00:10:57.151 "seek_data": false, 00:10:57.151 "copy": false, 00:10:57.151 
"nvme_iov_md": false 00:10:57.151 }, 00:10:57.151 "memory_domains": [ 00:10:57.151 { 00:10:57.151 "dma_device_id": "system", 00:10:57.151 "dma_device_type": 1 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.151 "dma_device_type": 2 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "system", 00:10:57.151 "dma_device_type": 1 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.151 "dma_device_type": 2 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "system", 00:10:57.151 "dma_device_type": 1 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.151 "dma_device_type": 2 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "system", 00:10:57.151 "dma_device_type": 1 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.151 "dma_device_type": 2 00:10:57.151 } 00:10:57.151 ], 00:10:57.151 "driver_specific": { 00:10:57.151 "raid": { 00:10:57.151 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c", 00:10:57.151 "strip_size_kb": 64, 00:10:57.151 "state": "online", 00:10:57.151 "raid_level": "raid0", 00:10:57.151 "superblock": true, 00:10:57.151 "num_base_bdevs": 4, 00:10:57.151 "num_base_bdevs_discovered": 4, 00:10:57.151 "num_base_bdevs_operational": 4, 00:10:57.151 "base_bdevs_list": [ 00:10:57.151 { 00:10:57.151 "name": "BaseBdev1", 00:10:57.151 "uuid": "8d77f036-cbf9-4694-ad19-98670c8388e7", 00:10:57.151 "is_configured": true, 00:10:57.151 "data_offset": 2048, 00:10:57.151 "data_size": 63488 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "name": "BaseBdev2", 00:10:57.151 "uuid": "a9fff84d-882a-4751-9ec7-e26f36bb2a8e", 00:10:57.151 "is_configured": true, 00:10:57.151 "data_offset": 2048, 00:10:57.151 "data_size": 63488 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "name": "BaseBdev3", 00:10:57.151 "uuid": "664c5b57-326c-4cdf-b9ea-bea22e611721", 00:10:57.151 "is_configured": true, 
00:10:57.151 "data_offset": 2048, 00:10:57.151 "data_size": 63488 00:10:57.151 }, 00:10:57.151 { 00:10:57.151 "name": "BaseBdev4", 00:10:57.151 "uuid": "fe36ba65-493d-41ad-9eed-bf7241647c23", 00:10:57.151 "is_configured": true, 00:10:57.151 "data_offset": 2048, 00:10:57.151 "data_size": 63488 00:10:57.151 } 00:10:57.151 ] 00:10:57.151 } 00:10:57.151 } 00:10:57.151 }' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:57.151 BaseBdev2 00:10:57.151 BaseBdev3 00:10:57.151 BaseBdev4' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.151 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.412 09:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.412 [2024-11-20 09:23:22.732748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.412 [2024-11-20 09:23:22.732790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.412 [2024-11-20 09:23:22.732846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.412 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:57.673 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.673 "name": "Existed_Raid", 00:10:57.673 "uuid": "d2e0a1a2-f743-4fd3-b12d-cffe8909a67c", 00:10:57.673 "strip_size_kb": 64, 00:10:57.673 "state": "offline", 00:10:57.673 "raid_level": "raid0", 00:10:57.673 "superblock": true, 00:10:57.673 "num_base_bdevs": 4, 00:10:57.673 "num_base_bdevs_discovered": 3, 00:10:57.673 "num_base_bdevs_operational": 3, 00:10:57.673 "base_bdevs_list": [ 00:10:57.673 { 00:10:57.673 "name": null, 00:10:57.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.673 "is_configured": false, 00:10:57.673 "data_offset": 0, 00:10:57.673 "data_size": 63488 00:10:57.673 }, 00:10:57.673 { 00:10:57.673 "name": "BaseBdev2", 00:10:57.673 "uuid": "a9fff84d-882a-4751-9ec7-e26f36bb2a8e", 00:10:57.673 "is_configured": true, 00:10:57.673 "data_offset": 2048, 00:10:57.673 "data_size": 63488 00:10:57.673 }, 00:10:57.673 { 00:10:57.673 "name": "BaseBdev3", 00:10:57.673 "uuid": "664c5b57-326c-4cdf-b9ea-bea22e611721", 00:10:57.673 "is_configured": true, 00:10:57.673 "data_offset": 2048, 00:10:57.673 "data_size": 63488 00:10:57.673 }, 00:10:57.673 { 00:10:57.673 "name": "BaseBdev4", 00:10:57.673 "uuid": "fe36ba65-493d-41ad-9eed-bf7241647c23", 00:10:57.673 "is_configured": true, 00:10:57.673 "data_offset": 2048, 00:10:57.673 "data_size": 63488 00:10:57.673 } 00:10:57.673 ] 00:10:57.673 }' 00:10:57.673 09:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.673 09:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.932 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.932 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.932 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.932 09:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.932 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.932 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.932 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.194 [2024-11-20 09:23:23.412658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.194 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.194 [2024-11-20 09:23:23.582419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:58.454 09:23:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 [2024-11-20 09:23:23.750333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:58.454 [2024-11-20 09:23:23.750400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.454 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 BaseBdev2 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 [ 00:10:58.713 { 00:10:58.713 "name": "BaseBdev2", 00:10:58.713 "aliases": [ 00:10:58.713 
"f0e85caf-a6fe-47a8-9874-bf59f98846a7" 00:10:58.713 ], 00:10:58.713 "product_name": "Malloc disk", 00:10:58.713 "block_size": 512, 00:10:58.713 "num_blocks": 65536, 00:10:58.713 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:10:58.713 "assigned_rate_limits": { 00:10:58.713 "rw_ios_per_sec": 0, 00:10:58.713 "rw_mbytes_per_sec": 0, 00:10:58.713 "r_mbytes_per_sec": 0, 00:10:58.713 "w_mbytes_per_sec": 0 00:10:58.713 }, 00:10:58.713 "claimed": false, 00:10:58.713 "zoned": false, 00:10:58.713 "supported_io_types": { 00:10:58.713 "read": true, 00:10:58.713 "write": true, 00:10:58.713 "unmap": true, 00:10:58.713 "flush": true, 00:10:58.713 "reset": true, 00:10:58.713 "nvme_admin": false, 00:10:58.713 "nvme_io": false, 00:10:58.713 "nvme_io_md": false, 00:10:58.713 "write_zeroes": true, 00:10:58.713 "zcopy": true, 00:10:58.713 "get_zone_info": false, 00:10:58.713 "zone_management": false, 00:10:58.713 "zone_append": false, 00:10:58.713 "compare": false, 00:10:58.713 "compare_and_write": false, 00:10:58.713 "abort": true, 00:10:58.713 "seek_hole": false, 00:10:58.713 "seek_data": false, 00:10:58.713 "copy": true, 00:10:58.713 "nvme_iov_md": false 00:10:58.713 }, 00:10:58.713 "memory_domains": [ 00:10:58.713 { 00:10:58.713 "dma_device_id": "system", 00:10:58.713 "dma_device_type": 1 00:10:58.713 }, 00:10:58.713 { 00:10:58.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.713 "dma_device_type": 2 00:10:58.713 } 00:10:58.713 ], 00:10:58.713 "driver_specific": {} 00:10:58.713 } 00:10:58.713 ] 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.713 09:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 BaseBdev3 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 [ 00:10:58.713 { 
00:10:58.713 "name": "BaseBdev3", 00:10:58.713 "aliases": [ 00:10:58.713 "b04cf066-1315-41f4-b14e-7e04173237c3" 00:10:58.713 ], 00:10:58.713 "product_name": "Malloc disk", 00:10:58.713 "block_size": 512, 00:10:58.713 "num_blocks": 65536, 00:10:58.713 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:10:58.713 "assigned_rate_limits": { 00:10:58.713 "rw_ios_per_sec": 0, 00:10:58.713 "rw_mbytes_per_sec": 0, 00:10:58.713 "r_mbytes_per_sec": 0, 00:10:58.713 "w_mbytes_per_sec": 0 00:10:58.713 }, 00:10:58.713 "claimed": false, 00:10:58.713 "zoned": false, 00:10:58.713 "supported_io_types": { 00:10:58.713 "read": true, 00:10:58.713 "write": true, 00:10:58.713 "unmap": true, 00:10:58.713 "flush": true, 00:10:58.713 "reset": true, 00:10:58.713 "nvme_admin": false, 00:10:58.713 "nvme_io": false, 00:10:58.713 "nvme_io_md": false, 00:10:58.713 "write_zeroes": true, 00:10:58.713 "zcopy": true, 00:10:58.713 "get_zone_info": false, 00:10:58.713 "zone_management": false, 00:10:58.713 "zone_append": false, 00:10:58.713 "compare": false, 00:10:58.713 "compare_and_write": false, 00:10:58.713 "abort": true, 00:10:58.713 "seek_hole": false, 00:10:58.713 "seek_data": false, 00:10:58.713 "copy": true, 00:10:58.713 "nvme_iov_md": false 00:10:58.713 }, 00:10:58.713 "memory_domains": [ 00:10:58.713 { 00:10:58.713 "dma_device_id": "system", 00:10:58.713 "dma_device_type": 1 00:10:58.713 }, 00:10:58.713 { 00:10:58.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.713 "dma_device_type": 2 00:10:58.713 } 00:10:58.713 ], 00:10:58.713 "driver_specific": {} 00:10:58.713 } 00:10:58.713 ] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 BaseBdev4 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:58.713 [ 00:10:58.713 { 00:10:58.713 "name": "BaseBdev4", 00:10:58.713 "aliases": [ 00:10:58.713 "67f9cdda-a68c-49a1-8d83-9f6cf19026d1" 00:10:58.713 ], 00:10:58.713 "product_name": "Malloc disk", 00:10:58.713 "block_size": 512, 00:10:58.713 "num_blocks": 65536, 00:10:58.713 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:10:58.713 "assigned_rate_limits": { 00:10:58.713 "rw_ios_per_sec": 0, 00:10:58.713 "rw_mbytes_per_sec": 0, 00:10:58.713 "r_mbytes_per_sec": 0, 00:10:58.713 "w_mbytes_per_sec": 0 00:10:58.713 }, 00:10:58.713 "claimed": false, 00:10:58.713 "zoned": false, 00:10:58.713 "supported_io_types": { 00:10:58.713 "read": true, 00:10:58.713 "write": true, 00:10:58.713 "unmap": true, 00:10:58.713 "flush": true, 00:10:58.713 "reset": true, 00:10:58.713 "nvme_admin": false, 00:10:58.713 "nvme_io": false, 00:10:58.713 "nvme_io_md": false, 00:10:58.713 "write_zeroes": true, 00:10:58.713 "zcopy": true, 00:10:58.713 "get_zone_info": false, 00:10:58.713 "zone_management": false, 00:10:58.713 "zone_append": false, 00:10:58.713 "compare": false, 00:10:58.713 "compare_and_write": false, 00:10:58.713 "abort": true, 00:10:58.713 "seek_hole": false, 00:10:58.713 "seek_data": false, 00:10:58.713 "copy": true, 00:10:58.713 "nvme_iov_md": false 00:10:58.713 }, 00:10:58.713 "memory_domains": [ 00:10:58.713 { 00:10:58.713 "dma_device_id": "system", 00:10:58.713 "dma_device_type": 1 00:10:58.713 }, 00:10:58.713 { 00:10:58.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.713 "dma_device_type": 2 00:10:58.713 } 00:10:58.713 ], 00:10:58.713 "driver_specific": {} 00:10:58.713 } 00:10:58.713 ] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.713 09:23:24 
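The loop above repeats the same pattern for each base device: `rpc_cmd bdev_malloc_create 32 512 -b BaseBdevN`, then `waitforbdev` polls `bdev_get_bdevs -b BaseBdevN -t 2000` until a descriptor like the JSON printed here appears. A minimal sketch of the readiness check on such a descriptor, assuming only the fields shown in the log (the `bdev_ready` helper is hypothetical, not SPDK code):

```python
import json

# Descriptor shape as printed by `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000`
# above, trimmed to the fields the check needs (a real run returns a fresh UUID).
descriptor = json.loads("""
[
  {
    "name": "BaseBdev2",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": false,
    "supported_io_types": {"read": true, "write": true, "unmap": true, "abort": true}
  }
]
""")

def bdev_ready(bdevs, name):
    """Return True if a bdev with `name` exists and supports basic I/O,
    roughly what a successful waitforbdev poll amounts to."""
    for b in bdevs:
        if b["name"] == name:
            io = b["supported_io_types"]
            return bool(io.get("read") and io.get("write"))
    return False

print(bdev_ready(descriptor, "BaseBdev2"))  # True: 65536 blocks * 512 B = the 32 MiB requested
```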
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.713 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.713 [2024-11-20 09:23:24.164648] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.713 [2024-11-20 09:23:24.164701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.713 [2024-11-20 09:23:24.164733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.973 [2024-11-20 09:23:24.166967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.973 [2024-11-20 09:23:24.167040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.973 "name": "Existed_Raid", 00:10:58.973 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:10:58.973 "strip_size_kb": 64, 00:10:58.973 "state": "configuring", 00:10:58.973 "raid_level": "raid0", 00:10:58.973 "superblock": true, 00:10:58.973 "num_base_bdevs": 4, 00:10:58.973 "num_base_bdevs_discovered": 3, 00:10:58.973 "num_base_bdevs_operational": 4, 00:10:58.973 "base_bdevs_list": [ 00:10:58.973 { 00:10:58.973 "name": "BaseBdev1", 00:10:58.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.973 "is_configured": false, 00:10:58.973 "data_offset": 0, 00:10:58.973 "data_size": 0 00:10:58.973 }, 00:10:58.973 { 00:10:58.973 "name": "BaseBdev2", 00:10:58.973 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:10:58.973 "is_configured": true, 00:10:58.973 "data_offset": 2048, 00:10:58.973 "data_size": 63488 
00:10:58.973 }, 00:10:58.973 { 00:10:58.973 "name": "BaseBdev3", 00:10:58.973 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:10:58.973 "is_configured": true, 00:10:58.973 "data_offset": 2048, 00:10:58.973 "data_size": 63488 00:10:58.973 }, 00:10:58.973 { 00:10:58.973 "name": "BaseBdev4", 00:10:58.973 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:10:58.973 "is_configured": true, 00:10:58.973 "data_offset": 2048, 00:10:58.973 "data_size": 63488 00:10:58.973 } 00:10:58.973 ] 00:10:58.973 }' 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.973 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.232 [2024-11-20 09:23:24.651885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.232 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.492 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.492 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.492 "name": "Existed_Raid", 00:10:59.492 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:10:59.492 "strip_size_kb": 64, 00:10:59.492 "state": "configuring", 00:10:59.492 "raid_level": "raid0", 00:10:59.492 "superblock": true, 00:10:59.492 "num_base_bdevs": 4, 00:10:59.492 "num_base_bdevs_discovered": 2, 00:10:59.492 "num_base_bdevs_operational": 4, 00:10:59.492 "base_bdevs_list": [ 00:10:59.492 { 00:10:59.492 "name": "BaseBdev1", 00:10:59.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.492 "is_configured": false, 00:10:59.492 "data_offset": 0, 00:10:59.492 "data_size": 0 00:10:59.492 }, 00:10:59.492 { 00:10:59.492 "name": null, 00:10:59.492 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:10:59.492 "is_configured": false, 00:10:59.492 "data_offset": 0, 00:10:59.492 "data_size": 63488 
00:10:59.492 }, 00:10:59.492 { 00:10:59.492 "name": "BaseBdev3", 00:10:59.492 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:10:59.492 "is_configured": true, 00:10:59.492 "data_offset": 2048, 00:10:59.492 "data_size": 63488 00:10:59.492 }, 00:10:59.492 { 00:10:59.492 "name": "BaseBdev4", 00:10:59.492 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:10:59.492 "is_configured": true, 00:10:59.492 "data_offset": 2048, 00:10:59.492 "data_size": 63488 00:10:59.492 } 00:10:59.492 ] 00:10:59.492 }' 00:10:59.492 09:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.492 09:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.752 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.011 [2024-11-20 09:23:25.225548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.011 BaseBdev1 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- 
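After `bdev_raid_remove_base_bdev BaseBdev2`, the state dump above shows `num_base_bdevs_discovered` dropping from 3 to 2 while the removed slot keeps its UUID with `"name": null`. The `verify_raid_bdev_state` helper checks this via `bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'`. A sketch of the same consistency checks in Python, assuming only the fields printed in the log (the `verify_state` function is illustrative, not part of the test scripts):

```python
import json

# Trimmed copy of the Existed_Raid info printed after removing BaseBdev2.
raid_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    """The assertions verify_raid_bdev_state expresses via jq, in one place."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # discovered must equal the count of configured slots
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured
    return True

print(verify_state(raid_info, "configuring", "raid0", 64, 4))  # True
```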
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.011 [ 00:11:00.011 { 00:11:00.011 "name": "BaseBdev1", 00:11:00.011 "aliases": [ 00:11:00.011 "80c9b49e-521a-4415-9b33-1d2cfd579101" 00:11:00.011 ], 00:11:00.011 "product_name": "Malloc disk", 00:11:00.011 "block_size": 512, 00:11:00.011 "num_blocks": 65536, 00:11:00.011 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:00.011 "assigned_rate_limits": { 00:11:00.011 "rw_ios_per_sec": 0, 00:11:00.011 "rw_mbytes_per_sec": 0, 
00:11:00.011 "r_mbytes_per_sec": 0, 00:11:00.011 "w_mbytes_per_sec": 0 00:11:00.011 }, 00:11:00.011 "claimed": true, 00:11:00.011 "claim_type": "exclusive_write", 00:11:00.011 "zoned": false, 00:11:00.011 "supported_io_types": { 00:11:00.011 "read": true, 00:11:00.011 "write": true, 00:11:00.011 "unmap": true, 00:11:00.011 "flush": true, 00:11:00.011 "reset": true, 00:11:00.011 "nvme_admin": false, 00:11:00.011 "nvme_io": false, 00:11:00.011 "nvme_io_md": false, 00:11:00.011 "write_zeroes": true, 00:11:00.011 "zcopy": true, 00:11:00.011 "get_zone_info": false, 00:11:00.011 "zone_management": false, 00:11:00.011 "zone_append": false, 00:11:00.011 "compare": false, 00:11:00.011 "compare_and_write": false, 00:11:00.011 "abort": true, 00:11:00.011 "seek_hole": false, 00:11:00.011 "seek_data": false, 00:11:00.011 "copy": true, 00:11:00.011 "nvme_iov_md": false 00:11:00.011 }, 00:11:00.011 "memory_domains": [ 00:11:00.011 { 00:11:00.011 "dma_device_id": "system", 00:11:00.011 "dma_device_type": 1 00:11:00.011 }, 00:11:00.011 { 00:11:00.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.011 "dma_device_type": 2 00:11:00.011 } 00:11:00.011 ], 00:11:00.011 "driver_specific": {} 00:11:00.011 } 00:11:00.011 ] 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.011 09:23:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.011 "name": "Existed_Raid", 00:11:00.011 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:00.011 "strip_size_kb": 64, 00:11:00.011 "state": "configuring", 00:11:00.011 "raid_level": "raid0", 00:11:00.011 "superblock": true, 00:11:00.011 "num_base_bdevs": 4, 00:11:00.011 "num_base_bdevs_discovered": 3, 00:11:00.011 "num_base_bdevs_operational": 4, 00:11:00.011 "base_bdevs_list": [ 00:11:00.011 { 00:11:00.011 "name": "BaseBdev1", 00:11:00.011 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:00.011 "is_configured": true, 00:11:00.011 "data_offset": 2048, 00:11:00.011 "data_size": 63488 00:11:00.011 }, 00:11:00.011 { 
00:11:00.011 "name": null, 00:11:00.011 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:00.011 "is_configured": false, 00:11:00.011 "data_offset": 0, 00:11:00.011 "data_size": 63488 00:11:00.011 }, 00:11:00.011 { 00:11:00.011 "name": "BaseBdev3", 00:11:00.011 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:00.011 "is_configured": true, 00:11:00.011 "data_offset": 2048, 00:11:00.011 "data_size": 63488 00:11:00.011 }, 00:11:00.011 { 00:11:00.011 "name": "BaseBdev4", 00:11:00.011 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:00.011 "is_configured": true, 00:11:00.011 "data_offset": 2048, 00:11:00.011 "data_size": 63488 00:11:00.011 } 00:11:00.011 ] 00:11:00.011 }' 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.011 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.580 [2024-11-20 09:23:25.816625] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.580 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.581 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.581 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.581 09:23:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.581 "name": "Existed_Raid", 00:11:00.581 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:00.581 "strip_size_kb": 64, 00:11:00.581 "state": "configuring", 00:11:00.581 "raid_level": "raid0", 00:11:00.581 "superblock": true, 00:11:00.581 "num_base_bdevs": 4, 00:11:00.581 "num_base_bdevs_discovered": 2, 00:11:00.581 "num_base_bdevs_operational": 4, 00:11:00.581 "base_bdevs_list": [ 00:11:00.581 { 00:11:00.581 "name": "BaseBdev1", 00:11:00.581 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:00.581 "is_configured": true, 00:11:00.581 "data_offset": 2048, 00:11:00.581 "data_size": 63488 00:11:00.581 }, 00:11:00.581 { 00:11:00.581 "name": null, 00:11:00.581 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:00.581 "is_configured": false, 00:11:00.581 "data_offset": 0, 00:11:00.581 "data_size": 63488 00:11:00.581 }, 00:11:00.581 { 00:11:00.581 "name": null, 00:11:00.581 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:00.581 "is_configured": false, 00:11:00.581 "data_offset": 0, 00:11:00.581 "data_size": 63488 00:11:00.581 }, 00:11:00.581 { 00:11:00.581 "name": "BaseBdev4", 00:11:00.581 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:00.581 "is_configured": true, 00:11:00.581 "data_offset": 2048, 00:11:00.581 "data_size": 63488 00:11:00.581 } 00:11:00.581 ] 00:11:00.581 }' 00:11:00.581 09:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.581 09:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.846 
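At this point both BaseBdev2 and BaseBdev3 slots read `"name": null` with `is_configured: false`, and the next step (`bdev_raid_add_base_bdev Existed_Raid BaseBdev3`) re-claims one of them, bumping `num_base_bdevs_discovered` back to 3. A toy model of that re-add, assuming slots are matched by the UUID the removed entry retains, as the log output suggests (`add_base_bdev` here is a hypothetical helper, not the SPDK implementation):

```python
# Removed slots keep their UUID with name null; re-adding a base bdev fills
# the matching slot and increments the discovered count.
def add_base_bdev(raid, name, uuid):
    for slot in raid["base_bdevs_list"]:
        if slot["name"] is None and slot["uuid"] == uuid:
            slot["name"] = name
            slot["is_configured"] = True
            raid["num_base_bdevs_discovered"] += 1
            return True
    return False  # no vacant slot with that uuid

# State mirroring the dump above (UUIDs copied from the log).
raid = {
    "num_base_bdevs_discovered": 2,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101",
         "is_configured": True},
        {"name": None, "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7",
         "is_configured": False},
        {"name": None, "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3",
         "is_configured": False},
        {"name": "BaseBdev4", "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1",
         "is_configured": True},
    ],
}

add_base_bdev(raid, "BaseBdev3", "b04cf066-1315-41f4-b14e-7e04173237c3")
print(raid["num_base_bdevs_discovered"])  # 3
```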
09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.846 [2024-11-20 09:23:26.247961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.846 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.107 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.107 "name": "Existed_Raid", 00:11:01.107 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:01.107 "strip_size_kb": 64, 00:11:01.107 "state": "configuring", 00:11:01.107 "raid_level": "raid0", 00:11:01.107 "superblock": true, 00:11:01.107 "num_base_bdevs": 4, 00:11:01.107 "num_base_bdevs_discovered": 3, 00:11:01.107 "num_base_bdevs_operational": 4, 00:11:01.107 "base_bdevs_list": [ 00:11:01.107 { 00:11:01.107 "name": "BaseBdev1", 00:11:01.107 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:01.107 "is_configured": true, 00:11:01.107 "data_offset": 2048, 00:11:01.107 "data_size": 63488 00:11:01.107 }, 00:11:01.107 { 00:11:01.107 "name": null, 00:11:01.107 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:01.107 "is_configured": false, 00:11:01.107 "data_offset": 0, 00:11:01.107 "data_size": 63488 00:11:01.107 }, 00:11:01.107 { 00:11:01.107 "name": "BaseBdev3", 00:11:01.107 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:01.107 "is_configured": true, 00:11:01.107 "data_offset": 2048, 00:11:01.107 "data_size": 63488 00:11:01.107 }, 00:11:01.107 { 00:11:01.107 "name": "BaseBdev4", 00:11:01.107 "uuid": 
"67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:01.107 "is_configured": true, 00:11:01.107 "data_offset": 2048, 00:11:01.107 "data_size": 63488 00:11:01.107 } 00:11:01.107 ] 00:11:01.107 }' 00:11:01.107 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.107 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.367 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.367 [2024-11-20 09:23:26.787163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.626 "name": "Existed_Raid", 00:11:01.626 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:01.626 "strip_size_kb": 64, 00:11:01.626 "state": "configuring", 00:11:01.626 "raid_level": "raid0", 00:11:01.626 "superblock": true, 00:11:01.626 "num_base_bdevs": 4, 00:11:01.626 "num_base_bdevs_discovered": 2, 00:11:01.626 "num_base_bdevs_operational": 4, 00:11:01.626 "base_bdevs_list": [ 00:11:01.626 { 00:11:01.626 "name": null, 00:11:01.626 
"uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:01.626 "is_configured": false, 00:11:01.626 "data_offset": 0, 00:11:01.626 "data_size": 63488 00:11:01.626 }, 00:11:01.626 { 00:11:01.626 "name": null, 00:11:01.626 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:01.626 "is_configured": false, 00:11:01.626 "data_offset": 0, 00:11:01.626 "data_size": 63488 00:11:01.626 }, 00:11:01.626 { 00:11:01.626 "name": "BaseBdev3", 00:11:01.626 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:01.626 "is_configured": true, 00:11:01.626 "data_offset": 2048, 00:11:01.626 "data_size": 63488 00:11:01.626 }, 00:11:01.626 { 00:11:01.626 "name": "BaseBdev4", 00:11:01.626 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:01.626 "is_configured": true, 00:11:01.626 "data_offset": 2048, 00:11:01.626 "data_size": 63488 00:11:01.626 } 00:11:01.626 ] 00:11:01.626 }' 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.626 09:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.193 [2024-11-20 09:23:27.444378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.193 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.194 09:23:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.194 "name": "Existed_Raid", 00:11:02.194 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:02.194 "strip_size_kb": 64, 00:11:02.194 "state": "configuring", 00:11:02.194 "raid_level": "raid0", 00:11:02.194 "superblock": true, 00:11:02.194 "num_base_bdevs": 4, 00:11:02.194 "num_base_bdevs_discovered": 3, 00:11:02.194 "num_base_bdevs_operational": 4, 00:11:02.194 "base_bdevs_list": [ 00:11:02.194 { 00:11:02.194 "name": null, 00:11:02.194 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:02.194 "is_configured": false, 00:11:02.194 "data_offset": 0, 00:11:02.194 "data_size": 63488 00:11:02.194 }, 00:11:02.194 { 00:11:02.194 "name": "BaseBdev2", 00:11:02.194 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:02.194 "is_configured": true, 00:11:02.194 "data_offset": 2048, 00:11:02.194 "data_size": 63488 00:11:02.194 }, 00:11:02.194 { 00:11:02.194 "name": "BaseBdev3", 00:11:02.194 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:02.194 "is_configured": true, 00:11:02.194 "data_offset": 2048, 00:11:02.194 "data_size": 63488 00:11:02.194 }, 00:11:02.194 { 00:11:02.194 "name": "BaseBdev4", 00:11:02.194 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:02.194 "is_configured": true, 00:11:02.194 "data_offset": 2048, 00:11:02.194 "data_size": 63488 00:11:02.194 } 00:11:02.194 ] 00:11:02.194 }' 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.194 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.451 09:23:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.452 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 80c9b49e-521a-4415-9b33-1d2cfd579101 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.710 09:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.710 [2024-11-20 09:23:28.040890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:02.710 [2024-11-20 09:23:28.041139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:02.710 [2024-11-20 09:23:28.041153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.710 [2024-11-20 09:23:28.041479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:02.710 [2024-11-20 09:23:28.041665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:02.710 [2024-11-20 09:23:28.041697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:02.710 [2024-11-20 09:23:28.041845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.710 NewBaseBdev 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.710 09:23:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.710 [ 00:11:02.710 { 00:11:02.710 "name": "NewBaseBdev", 00:11:02.710 "aliases": [ 00:11:02.710 "80c9b49e-521a-4415-9b33-1d2cfd579101" 00:11:02.710 ], 00:11:02.710 "product_name": "Malloc disk", 00:11:02.710 "block_size": 512, 00:11:02.710 "num_blocks": 65536, 00:11:02.710 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:02.710 "assigned_rate_limits": { 00:11:02.710 "rw_ios_per_sec": 0, 00:11:02.710 "rw_mbytes_per_sec": 0, 00:11:02.710 "r_mbytes_per_sec": 0, 00:11:02.710 "w_mbytes_per_sec": 0 00:11:02.710 }, 00:11:02.710 "claimed": true, 00:11:02.710 "claim_type": "exclusive_write", 00:11:02.710 "zoned": false, 00:11:02.710 "supported_io_types": { 00:11:02.710 "read": true, 00:11:02.710 "write": true, 00:11:02.710 "unmap": true, 00:11:02.710 "flush": true, 00:11:02.710 "reset": true, 00:11:02.710 "nvme_admin": false, 00:11:02.710 "nvme_io": false, 00:11:02.710 "nvme_io_md": false, 00:11:02.710 "write_zeroes": true, 00:11:02.710 "zcopy": true, 00:11:02.710 "get_zone_info": false, 00:11:02.710 "zone_management": false, 00:11:02.710 "zone_append": false, 00:11:02.710 "compare": false, 00:11:02.710 "compare_and_write": false, 00:11:02.710 "abort": true, 00:11:02.710 "seek_hole": false, 00:11:02.710 "seek_data": false, 00:11:02.710 "copy": true, 00:11:02.710 "nvme_iov_md": false 00:11:02.710 }, 00:11:02.710 "memory_domains": [ 00:11:02.710 { 00:11:02.710 "dma_device_id": "system", 00:11:02.710 "dma_device_type": 1 00:11:02.710 }, 00:11:02.710 { 00:11:02.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.710 "dma_device_type": 2 00:11:02.710 } 00:11:02.710 ], 00:11:02.710 "driver_specific": {} 00:11:02.710 } 00:11:02.710 ] 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.710 09:23:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.710 "name": "Existed_Raid", 00:11:02.710 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:02.710 "strip_size_kb": 64, 00:11:02.710 
"state": "online", 00:11:02.710 "raid_level": "raid0", 00:11:02.710 "superblock": true, 00:11:02.710 "num_base_bdevs": 4, 00:11:02.710 "num_base_bdevs_discovered": 4, 00:11:02.710 "num_base_bdevs_operational": 4, 00:11:02.710 "base_bdevs_list": [ 00:11:02.710 { 00:11:02.710 "name": "NewBaseBdev", 00:11:02.710 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:02.710 "is_configured": true, 00:11:02.710 "data_offset": 2048, 00:11:02.710 "data_size": 63488 00:11:02.710 }, 00:11:02.710 { 00:11:02.710 "name": "BaseBdev2", 00:11:02.710 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:02.710 "is_configured": true, 00:11:02.710 "data_offset": 2048, 00:11:02.710 "data_size": 63488 00:11:02.710 }, 00:11:02.710 { 00:11:02.710 "name": "BaseBdev3", 00:11:02.710 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:02.710 "is_configured": true, 00:11:02.710 "data_offset": 2048, 00:11:02.710 "data_size": 63488 00:11:02.710 }, 00:11:02.710 { 00:11:02.710 "name": "BaseBdev4", 00:11:02.710 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:02.710 "is_configured": true, 00:11:02.710 "data_offset": 2048, 00:11:02.710 "data_size": 63488 00:11:02.710 } 00:11:02.710 ] 00:11:02.710 }' 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.710 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.278 
09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.278 [2024-11-20 09:23:28.540574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.278 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.278 "name": "Existed_Raid", 00:11:03.278 "aliases": [ 00:11:03.278 "d63e67e0-008c-4032-9c00-bdb41792f8e2" 00:11:03.278 ], 00:11:03.278 "product_name": "Raid Volume", 00:11:03.278 "block_size": 512, 00:11:03.278 "num_blocks": 253952, 00:11:03.278 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:03.278 "assigned_rate_limits": { 00:11:03.278 "rw_ios_per_sec": 0, 00:11:03.278 "rw_mbytes_per_sec": 0, 00:11:03.278 "r_mbytes_per_sec": 0, 00:11:03.278 "w_mbytes_per_sec": 0 00:11:03.278 }, 00:11:03.278 "claimed": false, 00:11:03.278 "zoned": false, 00:11:03.278 "supported_io_types": { 00:11:03.278 "read": true, 00:11:03.278 "write": true, 00:11:03.278 "unmap": true, 00:11:03.278 "flush": true, 00:11:03.278 "reset": true, 00:11:03.278 "nvme_admin": false, 00:11:03.278 "nvme_io": false, 00:11:03.278 "nvme_io_md": false, 00:11:03.278 "write_zeroes": true, 00:11:03.278 "zcopy": false, 00:11:03.278 "get_zone_info": false, 00:11:03.278 "zone_management": false, 00:11:03.278 "zone_append": false, 00:11:03.278 "compare": false, 00:11:03.278 "compare_and_write": false, 00:11:03.278 "abort": 
false, 00:11:03.278 "seek_hole": false, 00:11:03.278 "seek_data": false, 00:11:03.278 "copy": false, 00:11:03.278 "nvme_iov_md": false 00:11:03.278 }, 00:11:03.278 "memory_domains": [ 00:11:03.278 { 00:11:03.278 "dma_device_id": "system", 00:11:03.278 "dma_device_type": 1 00:11:03.278 }, 00:11:03.278 { 00:11:03.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.278 "dma_device_type": 2 00:11:03.278 }, 00:11:03.278 { 00:11:03.279 "dma_device_id": "system", 00:11:03.279 "dma_device_type": 1 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.279 "dma_device_type": 2 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "dma_device_id": "system", 00:11:03.279 "dma_device_type": 1 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.279 "dma_device_type": 2 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "dma_device_id": "system", 00:11:03.279 "dma_device_type": 1 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.279 "dma_device_type": 2 00:11:03.279 } 00:11:03.279 ], 00:11:03.279 "driver_specific": { 00:11:03.279 "raid": { 00:11:03.279 "uuid": "d63e67e0-008c-4032-9c00-bdb41792f8e2", 00:11:03.279 "strip_size_kb": 64, 00:11:03.279 "state": "online", 00:11:03.279 "raid_level": "raid0", 00:11:03.279 "superblock": true, 00:11:03.279 "num_base_bdevs": 4, 00:11:03.279 "num_base_bdevs_discovered": 4, 00:11:03.279 "num_base_bdevs_operational": 4, 00:11:03.279 "base_bdevs_list": [ 00:11:03.279 { 00:11:03.279 "name": "NewBaseBdev", 00:11:03.279 "uuid": "80c9b49e-521a-4415-9b33-1d2cfd579101", 00:11:03.279 "is_configured": true, 00:11:03.279 "data_offset": 2048, 00:11:03.279 "data_size": 63488 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "name": "BaseBdev2", 00:11:03.279 "uuid": "f0e85caf-a6fe-47a8-9874-bf59f98846a7", 00:11:03.279 "is_configured": true, 00:11:03.279 "data_offset": 2048, 00:11:03.279 "data_size": 63488 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 
"name": "BaseBdev3", 00:11:03.279 "uuid": "b04cf066-1315-41f4-b14e-7e04173237c3", 00:11:03.279 "is_configured": true, 00:11:03.279 "data_offset": 2048, 00:11:03.279 "data_size": 63488 00:11:03.279 }, 00:11:03.279 { 00:11:03.279 "name": "BaseBdev4", 00:11:03.279 "uuid": "67f9cdda-a68c-49a1-8d83-9f6cf19026d1", 00:11:03.279 "is_configured": true, 00:11:03.279 "data_offset": 2048, 00:11:03.279 "data_size": 63488 00:11:03.279 } 00:11:03.279 ] 00:11:03.279 } 00:11:03.279 } 00:11:03.279 }' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:03.279 BaseBdev2 00:11:03.279 BaseBdev3 00:11:03.279 BaseBdev4' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.279 09:23:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.279 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.537 [2024-11-20 09:23:28.807728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.537 [2024-11-20 09:23:28.807769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.537 [2024-11-20 09:23:28.807858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.537 [2024-11-20 09:23:28.807935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.537 [2024-11-20 09:23:28.807947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70383 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70383 ']' 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70383 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70383 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.537 killing process with pid 70383 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70383' 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70383 00:11:03.537 [2024-11-20 09:23:28.854573] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.537 09:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70383 00:11:04.104 [2024-11-20 09:23:29.292158] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.475 09:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:05.475 00:11:05.475 real 0m12.231s 00:11:05.475 user 0m19.422s 00:11:05.475 sys 0m2.080s 00:11:05.475 09:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.475 
************************************ 00:11:05.475 09:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.475 END TEST raid_state_function_test_sb 00:11:05.475 ************************************ 00:11:05.475 09:23:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:05.475 09:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:05.475 09:23:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.475 09:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.475 ************************************ 00:11:05.475 START TEST raid_superblock_test 00:11:05.475 ************************************ 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:05.475 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71061 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71061 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71061 ']' 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.476 09:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.476 [2024-11-20 09:23:30.674662] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:05.476 [2024-11-20 09:23:30.674796] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71061 ] 00:11:05.476 [2024-11-20 09:23:30.852129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.741 [2024-11-20 09:23:30.986277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.004 [2024-11-20 09:23:31.225347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.004 [2024-11-20 09:23:31.225438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:06.264 
09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.264 malloc1 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.264 [2024-11-20 09:23:31.695194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.264 [2024-11-20 09:23:31.695269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.264 [2024-11-20 09:23:31.695298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:06.264 [2024-11-20 09:23:31.695308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.264 [2024-11-20 09:23:31.697791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.264 [2024-11-20 09:23:31.697833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.264 pt1 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.264 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.524 malloc2 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.524 [2024-11-20 09:23:31.756468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.524 [2024-11-20 09:23:31.756528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.524 [2024-11-20 09:23:31.756551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:06.524 [2024-11-20 09:23:31.756560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.524 [2024-11-20 09:23:31.758818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.524 [2024-11-20 09:23:31.758852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.524 
pt2 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.524 malloc3 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.524 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.524 [2024-11-20 09:23:31.826138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.524 [2024-11-20 09:23:31.826202] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.524 [2024-11-20 09:23:31.826226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:06.524 [2024-11-20 09:23:31.826236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.524 [2024-11-20 09:23:31.828746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.525 [2024-11-20 09:23:31.828789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.525 pt3 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.525 malloc4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.525 [2024-11-20 09:23:31.892029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:06.525 [2024-11-20 09:23:31.892095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.525 [2024-11-20 09:23:31.892118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:06.525 [2024-11-20 09:23:31.892129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.525 [2024-11-20 09:23:31.894589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.525 [2024-11-20 09:23:31.894626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:06.525 pt4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.525 [2024-11-20 09:23:31.904047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.525 [2024-11-20 
09:23:31.906138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.525 [2024-11-20 09:23:31.906219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.525 [2024-11-20 09:23:31.906289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:06.525 [2024-11-20 09:23:31.906546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:06.525 [2024-11-20 09:23:31.906569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.525 [2024-11-20 09:23:31.906880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.525 [2024-11-20 09:23:31.907088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:06.525 [2024-11-20 09:23:31.907112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:06.525 [2024-11-20 09:23:31.907304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.525 "name": "raid_bdev1", 00:11:06.525 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:06.525 "strip_size_kb": 64, 00:11:06.525 "state": "online", 00:11:06.525 "raid_level": "raid0", 00:11:06.525 "superblock": true, 00:11:06.525 "num_base_bdevs": 4, 00:11:06.525 "num_base_bdevs_discovered": 4, 00:11:06.525 "num_base_bdevs_operational": 4, 00:11:06.525 "base_bdevs_list": [ 00:11:06.525 { 00:11:06.525 "name": "pt1", 00:11:06.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.525 "is_configured": true, 00:11:06.525 "data_offset": 2048, 00:11:06.525 "data_size": 63488 00:11:06.525 }, 00:11:06.525 { 00:11:06.525 "name": "pt2", 00:11:06.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.525 "is_configured": true, 00:11:06.525 "data_offset": 2048, 00:11:06.525 "data_size": 63488 00:11:06.525 }, 00:11:06.525 { 00:11:06.525 "name": "pt3", 00:11:06.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.525 "is_configured": true, 00:11:06.525 "data_offset": 2048, 00:11:06.525 
"data_size": 63488 00:11:06.525 }, 00:11:06.525 { 00:11:06.525 "name": "pt4", 00:11:06.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.525 "is_configured": true, 00:11:06.525 "data_offset": 2048, 00:11:06.525 "data_size": 63488 00:11:06.525 } 00:11:06.525 ] 00:11:06.525 }' 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.525 09:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.092 [2024-11-20 09:23:32.355815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.092 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.092 "name": "raid_bdev1", 00:11:07.092 "aliases": [ 00:11:07.092 "6a133c73-9824-4783-a0e5-68b96ee4f0c3" 
00:11:07.092 ], 00:11:07.092 "product_name": "Raid Volume", 00:11:07.092 "block_size": 512, 00:11:07.092 "num_blocks": 253952, 00:11:07.092 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:07.092 "assigned_rate_limits": { 00:11:07.092 "rw_ios_per_sec": 0, 00:11:07.092 "rw_mbytes_per_sec": 0, 00:11:07.092 "r_mbytes_per_sec": 0, 00:11:07.092 "w_mbytes_per_sec": 0 00:11:07.092 }, 00:11:07.092 "claimed": false, 00:11:07.092 "zoned": false, 00:11:07.092 "supported_io_types": { 00:11:07.092 "read": true, 00:11:07.092 "write": true, 00:11:07.092 "unmap": true, 00:11:07.092 "flush": true, 00:11:07.092 "reset": true, 00:11:07.092 "nvme_admin": false, 00:11:07.092 "nvme_io": false, 00:11:07.092 "nvme_io_md": false, 00:11:07.092 "write_zeroes": true, 00:11:07.092 "zcopy": false, 00:11:07.092 "get_zone_info": false, 00:11:07.092 "zone_management": false, 00:11:07.092 "zone_append": false, 00:11:07.092 "compare": false, 00:11:07.092 "compare_and_write": false, 00:11:07.092 "abort": false, 00:11:07.092 "seek_hole": false, 00:11:07.092 "seek_data": false, 00:11:07.092 "copy": false, 00:11:07.092 "nvme_iov_md": false 00:11:07.092 }, 00:11:07.092 "memory_domains": [ 00:11:07.092 { 00:11:07.092 "dma_device_id": "system", 00:11:07.092 "dma_device_type": 1 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.092 "dma_device_type": 2 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": "system", 00:11:07.092 "dma_device_type": 1 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.092 "dma_device_type": 2 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": "system", 00:11:07.092 "dma_device_type": 1 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.092 "dma_device_type": 2 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": "system", 00:11:07.092 "dma_device_type": 1 00:11:07.092 }, 00:11:07.092 { 00:11:07.092 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:07.092 "dma_device_type": 2 00:11:07.092 } 00:11:07.092 ], 00:11:07.092 "driver_specific": { 00:11:07.092 "raid": { 00:11:07.092 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:07.092 "strip_size_kb": 64, 00:11:07.093 "state": "online", 00:11:07.093 "raid_level": "raid0", 00:11:07.093 "superblock": true, 00:11:07.093 "num_base_bdevs": 4, 00:11:07.093 "num_base_bdevs_discovered": 4, 00:11:07.093 "num_base_bdevs_operational": 4, 00:11:07.093 "base_bdevs_list": [ 00:11:07.093 { 00:11:07.093 "name": "pt1", 00:11:07.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.093 "is_configured": true, 00:11:07.093 "data_offset": 2048, 00:11:07.093 "data_size": 63488 00:11:07.093 }, 00:11:07.093 { 00:11:07.093 "name": "pt2", 00:11:07.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.093 "is_configured": true, 00:11:07.093 "data_offset": 2048, 00:11:07.093 "data_size": 63488 00:11:07.093 }, 00:11:07.093 { 00:11:07.093 "name": "pt3", 00:11:07.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.093 "is_configured": true, 00:11:07.093 "data_offset": 2048, 00:11:07.093 "data_size": 63488 00:11:07.093 }, 00:11:07.093 { 00:11:07.093 "name": "pt4", 00:11:07.093 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.093 "is_configured": true, 00:11:07.093 "data_offset": 2048, 00:11:07.093 "data_size": 63488 00:11:07.093 } 00:11:07.093 ] 00:11:07.093 } 00:11:07.093 } 00:11:07.093 }' 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:07.093 pt2 00:11:07.093 pt3 00:11:07.093 pt4' 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.093 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.351 09:23:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.351 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.352 [2024-11-20 09:23:32.699205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6a133c73-9824-4783-a0e5-68b96ee4f0c3 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6a133c73-9824-4783-a0e5-68b96ee4f0c3 ']' 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.352 [2024-11-20 09:23:32.742743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.352 [2024-11-20 09:23:32.742835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.352 [2024-11-20 09:23:32.742974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.352 [2024-11-20 09:23:32.743088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.352 [2024-11-20 09:23:32.743148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.352 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.612 09:23:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 [2024-11-20 09:23:32.890684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:07.612 [2024-11-20 09:23:32.892975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:07.612 [2024-11-20 09:23:32.893042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:07.612 [2024-11-20 09:23:32.893082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:07.612 [2024-11-20 09:23:32.893151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:07.612 [2024-11-20 09:23:32.893215] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:07.612 [2024-11-20 09:23:32.893237] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:07.612 [2024-11-20 09:23:32.893260] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:07.612 [2024-11-20 09:23:32.893276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.612 [2024-11-20 09:23:32.893292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:07.612 request: 00:11:07.612 { 00:11:07.612 "name": "raid_bdev1", 00:11:07.612 "raid_level": "raid0", 00:11:07.612 "base_bdevs": [ 00:11:07.612 "malloc1", 00:11:07.612 "malloc2", 00:11:07.612 "malloc3", 00:11:07.612 "malloc4" 00:11:07.612 ], 00:11:07.612 "strip_size_kb": 64, 00:11:07.612 "superblock": false, 00:11:07.612 "method": "bdev_raid_create", 00:11:07.612 "req_id": 1 00:11:07.612 } 00:11:07.612 Got JSON-RPC error response 00:11:07.612 response: 00:11:07.612 { 00:11:07.612 "code": -17, 00:11:07.612 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:07.612 } 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.612 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.612 [2024-11-20 09:23:32.950633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.612 [2024-11-20 09:23:32.950724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.612 [2024-11-20 09:23:32.950746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:07.612 [2024-11-20 09:23:32.950760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.613 [2024-11-20 09:23:32.953405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.613 [2024-11-20 09:23:32.953482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.613 [2024-11-20 09:23:32.953601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:07.613 [2024-11-20 09:23:32.953690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.613 pt1 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.613 "name": "raid_bdev1", 00:11:07.613 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:07.613 "strip_size_kb": 64, 00:11:07.613 "state": "configuring", 00:11:07.613 "raid_level": "raid0", 00:11:07.613 "superblock": true, 00:11:07.613 "num_base_bdevs": 4, 00:11:07.613 "num_base_bdevs_discovered": 1, 00:11:07.613 "num_base_bdevs_operational": 4, 00:11:07.613 "base_bdevs_list": [ 00:11:07.613 { 00:11:07.613 "name": "pt1", 00:11:07.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.613 "is_configured": true, 00:11:07.613 "data_offset": 2048, 00:11:07.613 "data_size": 63488 00:11:07.613 }, 00:11:07.613 { 00:11:07.613 "name": null, 00:11:07.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.613 "is_configured": false, 00:11:07.613 "data_offset": 2048, 00:11:07.613 "data_size": 63488 00:11:07.613 }, 00:11:07.613 { 00:11:07.613 "name": null, 00:11:07.613 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.613 "is_configured": false, 00:11:07.613 "data_offset": 2048, 00:11:07.613 "data_size": 63488 00:11:07.613 }, 00:11:07.613 { 00:11:07.613 "name": null, 00:11:07.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.613 "is_configured": false, 00:11:07.613 "data_offset": 2048, 00:11:07.613 "data_size": 63488 00:11:07.613 } 00:11:07.613 ] 00:11:07.613 }' 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.613 09:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.183 [2024-11-20 09:23:33.421911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.183 [2024-11-20 09:23:33.422081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.183 [2024-11-20 09:23:33.422134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:08.183 [2024-11-20 09:23:33.422178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.183 [2024-11-20 09:23:33.422737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.183 [2024-11-20 09:23:33.422808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.183 [2024-11-20 09:23:33.422939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.183 [2024-11-20 09:23:33.423002] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.183 pt2 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.183 [2024-11-20 09:23:33.433947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.183 09:23:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.183 "name": "raid_bdev1", 00:11:08.183 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:08.183 "strip_size_kb": 64, 00:11:08.183 "state": "configuring", 00:11:08.183 "raid_level": "raid0", 00:11:08.183 "superblock": true, 00:11:08.183 "num_base_bdevs": 4, 00:11:08.183 "num_base_bdevs_discovered": 1, 00:11:08.183 "num_base_bdevs_operational": 4, 00:11:08.183 "base_bdevs_list": [ 00:11:08.183 { 00:11:08.183 "name": "pt1", 00:11:08.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.183 "is_configured": true, 00:11:08.183 "data_offset": 2048, 00:11:08.183 "data_size": 63488 00:11:08.183 }, 00:11:08.183 { 00:11:08.183 "name": null, 00:11:08.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.183 "is_configured": false, 00:11:08.183 "data_offset": 0, 00:11:08.183 "data_size": 63488 00:11:08.183 }, 00:11:08.183 { 00:11:08.183 "name": null, 00:11:08.183 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.183 "is_configured": false, 00:11:08.183 "data_offset": 2048, 00:11:08.183 "data_size": 63488 00:11:08.183 }, 00:11:08.183 { 00:11:08.183 "name": null, 00:11:08.183 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.183 "is_configured": false, 00:11:08.183 "data_offset": 2048, 00:11:08.183 "data_size": 63488 00:11:08.183 } 00:11:08.183 ] 00:11:08.183 }' 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.183 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.752 [2024-11-20 09:23:33.949098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.752 [2024-11-20 09:23:33.949250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.752 [2024-11-20 09:23:33.949298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:08.752 [2024-11-20 09:23:33.949333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.752 [2024-11-20 09:23:33.949896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.752 [2024-11-20 09:23:33.949966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.752 [2024-11-20 09:23:33.950092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.752 [2024-11-20 09:23:33.950151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.752 pt2 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.752 [2024-11-20 09:23:33.961042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.752 [2024-11-20 09:23:33.961158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.752 [2024-11-20 09:23:33.961207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:08.752 [2024-11-20 09:23:33.961248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.752 [2024-11-20 09:23:33.961751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.752 [2024-11-20 09:23:33.961821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.752 [2024-11-20 09:23:33.961933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:08.752 [2024-11-20 09:23:33.961960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.752 pt3 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.752 [2024-11-20 09:23:33.972974] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:08.752 [2024-11-20 09:23:33.973032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.752 [2024-11-20 09:23:33.973054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:08.752 [2024-11-20 09:23:33.973063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.752 [2024-11-20 09:23:33.973543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.752 [2024-11-20 09:23:33.973563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:08.752 [2024-11-20 09:23:33.973650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:08.752 [2024-11-20 09:23:33.973674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:08.752 [2024-11-20 09:23:33.973874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:08.752 [2024-11-20 09:23:33.973890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.752 [2024-11-20 09:23:33.974180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:08.752 [2024-11-20 09:23:33.974351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:08.752 [2024-11-20 09:23:33.974366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:08.752 [2024-11-20 09:23:33.974535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.752 pt4 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:08.752 
09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.752 09:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.752 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.752 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.752 "name": "raid_bdev1", 00:11:08.752 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:08.752 "strip_size_kb": 64, 00:11:08.752 "state": "online", 00:11:08.753 "raid_level": "raid0", 00:11:08.753 "superblock": true, 00:11:08.753 
"num_base_bdevs": 4, 00:11:08.753 "num_base_bdevs_discovered": 4, 00:11:08.753 "num_base_bdevs_operational": 4, 00:11:08.753 "base_bdevs_list": [ 00:11:08.753 { 00:11:08.753 "name": "pt1", 00:11:08.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.753 "is_configured": true, 00:11:08.753 "data_offset": 2048, 00:11:08.753 "data_size": 63488 00:11:08.753 }, 00:11:08.753 { 00:11:08.753 "name": "pt2", 00:11:08.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.753 "is_configured": true, 00:11:08.753 "data_offset": 2048, 00:11:08.753 "data_size": 63488 00:11:08.753 }, 00:11:08.753 { 00:11:08.753 "name": "pt3", 00:11:08.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.753 "is_configured": true, 00:11:08.753 "data_offset": 2048, 00:11:08.753 "data_size": 63488 00:11:08.753 }, 00:11:08.753 { 00:11:08.753 "name": "pt4", 00:11:08.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.753 "is_configured": true, 00:11:08.753 "data_offset": 2048, 00:11:08.753 "data_size": 63488 00:11:08.753 } 00:11:08.753 ] 00:11:08.753 }' 00:11:08.753 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.753 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 [2024-11-20 09:23:34.484621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.335 "name": "raid_bdev1", 00:11:09.335 "aliases": [ 00:11:09.335 "6a133c73-9824-4783-a0e5-68b96ee4f0c3" 00:11:09.335 ], 00:11:09.335 "product_name": "Raid Volume", 00:11:09.335 "block_size": 512, 00:11:09.335 "num_blocks": 253952, 00:11:09.335 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:09.335 "assigned_rate_limits": { 00:11:09.335 "rw_ios_per_sec": 0, 00:11:09.335 "rw_mbytes_per_sec": 0, 00:11:09.335 "r_mbytes_per_sec": 0, 00:11:09.335 "w_mbytes_per_sec": 0 00:11:09.335 }, 00:11:09.335 "claimed": false, 00:11:09.335 "zoned": false, 00:11:09.335 "supported_io_types": { 00:11:09.335 "read": true, 00:11:09.335 "write": true, 00:11:09.335 "unmap": true, 00:11:09.335 "flush": true, 00:11:09.335 "reset": true, 00:11:09.335 "nvme_admin": false, 00:11:09.335 "nvme_io": false, 00:11:09.335 "nvme_io_md": false, 00:11:09.335 "write_zeroes": true, 00:11:09.335 "zcopy": false, 00:11:09.335 "get_zone_info": false, 00:11:09.335 "zone_management": false, 00:11:09.335 "zone_append": false, 00:11:09.335 "compare": false, 00:11:09.335 "compare_and_write": false, 00:11:09.335 "abort": false, 00:11:09.335 "seek_hole": false, 00:11:09.335 "seek_data": false, 00:11:09.335 "copy": false, 00:11:09.335 "nvme_iov_md": false 00:11:09.335 }, 00:11:09.335 "memory_domains": [ 00:11:09.335 { 00:11:09.335 "dma_device_id": "system", 
00:11:09.335 "dma_device_type": 1 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.335 "dma_device_type": 2 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "system", 00:11:09.335 "dma_device_type": 1 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.335 "dma_device_type": 2 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "system", 00:11:09.335 "dma_device_type": 1 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.335 "dma_device_type": 2 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "system", 00:11:09.335 "dma_device_type": 1 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.335 "dma_device_type": 2 00:11:09.335 } 00:11:09.335 ], 00:11:09.335 "driver_specific": { 00:11:09.335 "raid": { 00:11:09.335 "uuid": "6a133c73-9824-4783-a0e5-68b96ee4f0c3", 00:11:09.335 "strip_size_kb": 64, 00:11:09.335 "state": "online", 00:11:09.335 "raid_level": "raid0", 00:11:09.335 "superblock": true, 00:11:09.335 "num_base_bdevs": 4, 00:11:09.335 "num_base_bdevs_discovered": 4, 00:11:09.335 "num_base_bdevs_operational": 4, 00:11:09.335 "base_bdevs_list": [ 00:11:09.335 { 00:11:09.335 "name": "pt1", 00:11:09.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.335 "is_configured": true, 00:11:09.335 "data_offset": 2048, 00:11:09.335 "data_size": 63488 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "name": "pt2", 00:11:09.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.335 "is_configured": true, 00:11:09.335 "data_offset": 2048, 00:11:09.335 "data_size": 63488 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "name": "pt3", 00:11:09.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.335 "is_configured": true, 00:11:09.335 "data_offset": 2048, 00:11:09.335 "data_size": 63488 00:11:09.335 }, 00:11:09.335 { 00:11:09.335 "name": "pt4", 00:11:09.335 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.335 "is_configured": true, 00:11:09.335 "data_offset": 2048, 00:11:09.335 "data_size": 63488 00:11:09.335 } 00:11:09.335 ] 00:11:09.335 } 00:11:09.335 } 00:11:09.335 }' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.335 pt2 00:11:09.335 pt3 00:11:09.335 pt4' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.335 
09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.335 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.336 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.336 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.336 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:09.595 [2024-11-20 09:23:34.792162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6a133c73-9824-4783-a0e5-68b96ee4f0c3 '!=' 6a133c73-9824-4783-a0e5-68b96ee4f0c3 ']' 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71061 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71061 ']' 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71061 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:09.595 09:23:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71061 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.595 killing process with pid 71061 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71061' 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71061 00:11:09.595 [2024-11-20 09:23:34.878031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.595 [2024-11-20 09:23:34.878133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.595 09:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71061 00:11:09.595 [2024-11-20 09:23:34.878219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.595 [2024-11-20 09:23:34.878230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:10.163 [2024-11-20 09:23:35.359297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.539 09:23:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:11.539 00:11:11.539 real 0m6.098s 00:11:11.539 user 0m8.755s 00:11:11.539 sys 0m0.940s 00:11:11.539 09:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.539 ************************************ 00:11:11.539 END TEST raid_superblock_test 00:11:11.539 ************************************ 00:11:11.539 09:23:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.539 
09:23:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:11.539 09:23:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.539 09:23:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.539 09:23:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.539 ************************************ 00:11:11.539 START TEST raid_read_error_test 00:11:11.539 ************************************ 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QMq7GkM2rb 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71327 00:11:11.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71327 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71327 ']' 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.539 09:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.539 [2024-11-20 09:23:36.870761] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:11.539 [2024-11-20 09:23:36.870996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:11:11.799 [2024-11-20 09:23:37.050723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.799 [2024-11-20 09:23:37.207516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.058 [2024-11-20 09:23:37.445015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.058 [2024-11-20 09:23:37.445188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 BaseBdev1_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 true 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 [2024-11-20 09:23:37.845513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.628 [2024-11-20 09:23:37.845594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.628 [2024-11-20 09:23:37.845621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.628 [2024-11-20 09:23:37.845634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.628 [2024-11-20 09:23:37.848246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.628 [2024-11-20 09:23:37.848302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.628 BaseBdev1 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 BaseBdev2_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 true 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 [2024-11-20 09:23:37.917268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.628 [2024-11-20 09:23:37.917392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.628 [2024-11-20 09:23:37.917419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.628 [2024-11-20 09:23:37.917441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.628 [2024-11-20 09:23:37.919900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.628 [2024-11-20 09:23:37.919948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.628 BaseBdev2 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 BaseBdev3_malloc 00:11:12.628 09:23:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.628 true 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.628 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.629 09:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 [2024-11-20 09:23:37.998910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.629 [2024-11-20 09:23:37.998993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.629 [2024-11-20 09:23:37.999017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:12.629 [2024-11-20 09:23:37.999028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.629 [2024-11-20 09:23:38.001481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.629 [2024-11-20 09:23:38.001552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:12.629 BaseBdev3 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 BaseBdev4_malloc 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 true 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 [2024-11-20 09:23:38.070264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:12.629 [2024-11-20 09:23:38.070328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.629 [2024-11-20 09:23:38.070350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:12.629 [2024-11-20 09:23:38.070361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.629 [2024-11-20 09:23:38.072668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.629 [2024-11-20 09:23:38.072773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:12.629 BaseBdev4 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.629 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.889 [2024-11-20 09:23:38.082302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.889 [2024-11-20 09:23:38.084337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.889 [2024-11-20 09:23:38.084425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.889 [2024-11-20 09:23:38.084523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.889 [2024-11-20 09:23:38.084781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:12.889 [2024-11-20 09:23:38.084805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.889 [2024-11-20 09:23:38.085076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:12.889 [2024-11-20 09:23:38.085250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:12.889 [2024-11-20 09:23:38.085262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:12.889 [2024-11-20 09:23:38.085494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:12.889 09:23:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.889 "name": "raid_bdev1", 00:11:12.889 "uuid": "1fd508f3-ee96-4f2b-bce9-6a63d58dd77f", 00:11:12.889 "strip_size_kb": 64, 00:11:12.889 "state": "online", 00:11:12.889 "raid_level": "raid0", 00:11:12.889 "superblock": true, 00:11:12.889 "num_base_bdevs": 4, 00:11:12.889 "num_base_bdevs_discovered": 4, 00:11:12.889 "num_base_bdevs_operational": 4, 00:11:12.889 "base_bdevs_list": [ 00:11:12.889 
{ 00:11:12.889 "name": "BaseBdev1", 00:11:12.889 "uuid": "4e86b6b0-3a29-5b82-b0af-2ebeec1986d4", 00:11:12.889 "is_configured": true, 00:11:12.889 "data_offset": 2048, 00:11:12.889 "data_size": 63488 00:11:12.889 }, 00:11:12.889 { 00:11:12.889 "name": "BaseBdev2", 00:11:12.889 "uuid": "cf726168-edc1-5cdd-b7d8-445aa95917c9", 00:11:12.889 "is_configured": true, 00:11:12.889 "data_offset": 2048, 00:11:12.889 "data_size": 63488 00:11:12.889 }, 00:11:12.889 { 00:11:12.889 "name": "BaseBdev3", 00:11:12.889 "uuid": "2662a2b7-9d93-57a1-bca7-a2281b6ece2e", 00:11:12.889 "is_configured": true, 00:11:12.889 "data_offset": 2048, 00:11:12.889 "data_size": 63488 00:11:12.889 }, 00:11:12.889 { 00:11:12.889 "name": "BaseBdev4", 00:11:12.889 "uuid": "1d3fb9aa-6bb9-57a3-a85e-5e2ba94f0c19", 00:11:12.889 "is_configured": true, 00:11:12.889 "data_offset": 2048, 00:11:12.889 "data_size": 63488 00:11:12.889 } 00:11:12.889 ] 00:11:12.889 }' 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.889 09:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.148 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.148 09:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.407 [2024-11-20 09:23:38.626900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.356 09:23:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.356 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.356 09:23:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.357 "name": "raid_bdev1", 00:11:14.357 "uuid": "1fd508f3-ee96-4f2b-bce9-6a63d58dd77f", 00:11:14.357 "strip_size_kb": 64, 00:11:14.357 "state": "online", 00:11:14.357 "raid_level": "raid0", 00:11:14.357 "superblock": true, 00:11:14.357 "num_base_bdevs": 4, 00:11:14.357 "num_base_bdevs_discovered": 4, 00:11:14.357 "num_base_bdevs_operational": 4, 00:11:14.357 "base_bdevs_list": [ 00:11:14.357 { 00:11:14.357 "name": "BaseBdev1", 00:11:14.357 "uuid": "4e86b6b0-3a29-5b82-b0af-2ebeec1986d4", 00:11:14.357 "is_configured": true, 00:11:14.357 "data_offset": 2048, 00:11:14.357 "data_size": 63488 00:11:14.357 }, 00:11:14.357 { 00:11:14.357 "name": "BaseBdev2", 00:11:14.357 "uuid": "cf726168-edc1-5cdd-b7d8-445aa95917c9", 00:11:14.357 "is_configured": true, 00:11:14.357 "data_offset": 2048, 00:11:14.357 "data_size": 63488 00:11:14.357 }, 00:11:14.357 { 00:11:14.357 "name": "BaseBdev3", 00:11:14.357 "uuid": "2662a2b7-9d93-57a1-bca7-a2281b6ece2e", 00:11:14.357 "is_configured": true, 00:11:14.357 "data_offset": 2048, 00:11:14.357 "data_size": 63488 00:11:14.357 }, 00:11:14.357 { 00:11:14.357 "name": "BaseBdev4", 00:11:14.357 "uuid": "1d3fb9aa-6bb9-57a3-a85e-5e2ba94f0c19", 00:11:14.357 "is_configured": true, 00:11:14.357 "data_offset": 2048, 00:11:14.357 "data_size": 63488 00:11:14.357 } 00:11:14.357 ] 00:11:14.357 }' 00:11:14.357 09:23:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.357 09:23:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 [2024-11-20 09:23:40.024207] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.617 [2024-11-20 09:23:40.024246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.617 [2024-11-20 09:23:40.027310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.617 [2024-11-20 09:23:40.027416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.617 [2024-11-20 09:23:40.027500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.617 [2024-11-20 09:23:40.027563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:14.617 { 00:11:14.617 "results": [ 00:11:14.617 { 00:11:14.617 "job": "raid_bdev1", 00:11:14.617 "core_mask": "0x1", 00:11:14.617 "workload": "randrw", 00:11:14.617 "percentage": 50, 00:11:14.617 "status": "finished", 00:11:14.617 "queue_depth": 1, 00:11:14.617 "io_size": 131072, 00:11:14.617 "runtime": 1.397777, 00:11:14.617 "iops": 13766.859806678747, 00:11:14.617 "mibps": 1720.8574758348434, 00:11:14.617 "io_failed": 1, 00:11:14.617 "io_timeout": 0, 00:11:14.617 "avg_latency_us": 100.83942203048144, 00:11:14.617 "min_latency_us": 27.053275109170304, 00:11:14.617 "max_latency_us": 1652.709170305677 00:11:14.617 } 00:11:14.617 ], 00:11:14.617 "core_count": 1 00:11:14.617 } 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71327 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71327 ']' 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71327 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71327 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.617 killing process with pid 71327 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71327' 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71327 00:11:14.617 [2024-11-20 09:23:40.062331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.617 09:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71327 00:11:15.186 [2024-11-20 09:23:40.433663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.628 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.628 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.628 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QMq7GkM2rb 00:11:16.628 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:16.629 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:16.629 ************************************ 00:11:16.629 END TEST raid_read_error_test 00:11:16.629 ************************************ 00:11:16.629 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.629 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.629 09:23:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:16.629 00:11:16.629 real 0m5.033s 
00:11:16.629 user 0m5.954s 00:11:16.629 sys 0m0.604s 00:11:16.629 09:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.629 09:23:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.629 09:23:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:16.629 09:23:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.629 09:23:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.629 09:23:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.629 ************************************ 00:11:16.629 START TEST raid_write_error_test 00:11:16.629 ************************************ 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Z3OvUmGoNi 00:11:16.629 09:23:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71478 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71478 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:16.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71478 ']' 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.629 09:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.629 [2024-11-20 09:23:41.950550] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:16.629 [2024-11-20 09:23:41.950779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71478 ] 00:11:16.888 [2024-11-20 09:23:42.115209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.888 [2024-11-20 09:23:42.269577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.147 [2024-11-20 09:23:42.516539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.147 [2024-11-20 09:23:42.516653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 BaseBdev1_malloc 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 true 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 [2024-11-20 09:23:42.948571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.717 [2024-11-20 09:23:42.948633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.717 [2024-11-20 09:23:42.948658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.717 [2024-11-20 09:23:42.948671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.717 [2024-11-20 09:23:42.951139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.717 [2024-11-20 09:23:42.951237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.717 BaseBdev1 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 BaseBdev2_malloc 00:11:17.717 09:23:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.717 09:23:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 true 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 [2024-11-20 09:23:43.016306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.717 [2024-11-20 09:23:43.016372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.717 [2024-11-20 09:23:43.016393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.717 [2024-11-20 09:23:43.016406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.717 [2024-11-20 09:23:43.018820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.717 [2024-11-20 09:23:43.018864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.717 BaseBdev2 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:17.717 BaseBdev3_malloc 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 true 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 [2024-11-20 09:23:43.093427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.717 [2024-11-20 09:23:43.093498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.717 [2024-11-20 09:23:43.093518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.717 [2024-11-20 09:23:43.093530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.717 [2024-11-20 09:23:43.095948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.717 [2024-11-20 09:23:43.095993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:17.717 BaseBdev3 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.717 BaseBdev4_malloc 00:11:17.717 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.718 true 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.718 [2024-11-20 09:23:43.154405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:17.718 [2024-11-20 09:23:43.154483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.718 [2024-11-20 09:23:43.154506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.718 [2024-11-20 09:23:43.154520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.718 [2024-11-20 09:23:43.156918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.718 [2024-11-20 09:23:43.157029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:17.718 BaseBdev4 
00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.718 [2024-11-20 09:23:43.162475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.718 [2024-11-20 09:23:43.164563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.718 [2024-11-20 09:23:43.164651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.718 [2024-11-20 09:23:43.164728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.718 [2024-11-20 09:23:43.164990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:17.718 [2024-11-20 09:23:43.165012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.718 [2024-11-20 09:23:43.165298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:17.718 [2024-11-20 09:23:43.165508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:17.718 [2024-11-20 09:23:43.165523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:17.718 [2024-11-20 09:23:43.165708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.718 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.978 "name": "raid_bdev1", 00:11:17.978 "uuid": "c37a2d1f-beda-42b4-8a91-3ee6f36bf5b4", 00:11:17.978 "strip_size_kb": 64, 00:11:17.978 "state": "online", 00:11:17.978 "raid_level": "raid0", 00:11:17.978 "superblock": true, 00:11:17.978 "num_base_bdevs": 4, 00:11:17.978 "num_base_bdevs_discovered": 4, 00:11:17.978 
"num_base_bdevs_operational": 4, 00:11:17.978 "base_bdevs_list": [ 00:11:17.978 { 00:11:17.978 "name": "BaseBdev1", 00:11:17.978 "uuid": "4ee61f9e-8c83-51e3-91d3-0fd71d406027", 00:11:17.978 "is_configured": true, 00:11:17.978 "data_offset": 2048, 00:11:17.978 "data_size": 63488 00:11:17.978 }, 00:11:17.978 { 00:11:17.978 "name": "BaseBdev2", 00:11:17.978 "uuid": "3e5358fe-b131-5c64-b84f-1474908f320b", 00:11:17.978 "is_configured": true, 00:11:17.978 "data_offset": 2048, 00:11:17.978 "data_size": 63488 00:11:17.978 }, 00:11:17.978 { 00:11:17.978 "name": "BaseBdev3", 00:11:17.978 "uuid": "71f6598f-6bad-5368-9d5d-76be88f2feba", 00:11:17.978 "is_configured": true, 00:11:17.978 "data_offset": 2048, 00:11:17.978 "data_size": 63488 00:11:17.978 }, 00:11:17.978 { 00:11:17.978 "name": "BaseBdev4", 00:11:17.978 "uuid": "a814b85f-9b61-50a4-9ab4-236e0b90f5ce", 00:11:17.978 "is_configured": true, 00:11:17.978 "data_offset": 2048, 00:11:17.978 "data_size": 63488 00:11:17.978 } 00:11:17.978 ] 00:11:17.978 }' 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.978 09:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.237 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:18.237 09:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:18.496 [2024-11-20 09:23:43.775204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.433 "name": "raid_bdev1", 00:11:19.433 "uuid": "c37a2d1f-beda-42b4-8a91-3ee6f36bf5b4", 00:11:19.433 "strip_size_kb": 64, 00:11:19.433 "state": "online", 00:11:19.433 "raid_level": "raid0", 00:11:19.433 "superblock": true, 00:11:19.433 "num_base_bdevs": 4, 00:11:19.433 "num_base_bdevs_discovered": 4, 00:11:19.433 "num_base_bdevs_operational": 4, 00:11:19.433 "base_bdevs_list": [ 00:11:19.433 { 00:11:19.433 "name": "BaseBdev1", 00:11:19.433 "uuid": "4ee61f9e-8c83-51e3-91d3-0fd71d406027", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 }, 00:11:19.433 { 00:11:19.433 "name": "BaseBdev2", 00:11:19.433 "uuid": "3e5358fe-b131-5c64-b84f-1474908f320b", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 }, 00:11:19.433 { 00:11:19.433 "name": "BaseBdev3", 00:11:19.433 "uuid": "71f6598f-6bad-5368-9d5d-76be88f2feba", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 }, 00:11:19.433 { 00:11:19.433 "name": "BaseBdev4", 00:11:19.433 "uuid": "a814b85f-9b61-50a4-9ab4-236e0b90f5ce", 00:11:19.433 "is_configured": true, 00:11:19.433 "data_offset": 2048, 00:11:19.433 "data_size": 63488 00:11:19.433 } 00:11:19.433 ] 00:11:19.433 }' 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.433 09:23:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.000 09:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.000 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:20.001 [2024-11-20 09:23:45.152641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.001 [2024-11-20 09:23:45.152681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.001 [2024-11-20 09:23:45.155988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.001 [2024-11-20 09:23:45.156094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.001 [2024-11-20 09:23:45.156175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.001 [2024-11-20 09:23:45.156230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:20.001 { 00:11:20.001 "results": [ 00:11:20.001 { 00:11:20.001 "job": "raid_bdev1", 00:11:20.001 "core_mask": "0x1", 00:11:20.001 "workload": "randrw", 00:11:20.001 "percentage": 50, 00:11:20.001 "status": "finished", 00:11:20.001 "queue_depth": 1, 00:11:20.001 "io_size": 131072, 00:11:20.001 "runtime": 1.377774, 00:11:20.001 "iops": 13126.971477179857, 00:11:20.001 "mibps": 1640.8714346474821, 00:11:20.001 "io_failed": 1, 00:11:20.001 "io_timeout": 0, 00:11:20.001 "avg_latency_us": 105.72584917681955, 00:11:20.001 "min_latency_us": 29.512663755458515, 00:11:20.001 "max_latency_us": 1745.7187772925763 00:11:20.001 } 00:11:20.001 ], 00:11:20.001 "core_count": 1 00:11:20.001 } 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71478 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71478 ']' 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71478 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71478 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71478' 00:11:20.001 killing process with pid 71478 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71478 00:11:20.001 [2024-11-20 09:23:45.193135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.001 09:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71478 00:11:20.260 [2024-11-20 09:23:45.586763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:21.638 09:23:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Z3OvUmGoNi 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:21.638 00:11:21.638 real 0m5.167s 00:11:21.638 user 0m6.149s 00:11:21.638 sys 0m0.601s 00:11:21.638 
09:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.638 09:23:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.638 ************************************ 00:11:21.638 END TEST raid_write_error_test 00:11:21.638 ************************************ 00:11:21.638 09:23:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:21.638 09:23:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:21.638 09:23:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.638 09:23:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.638 09:23:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.638 ************************************ 00:11:21.638 START TEST raid_state_function_test 00:11:21.638 ************************************ 00:11:21.638 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:21.638 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:21.638 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.639 09:23:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:21.639 09:23:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71627 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71627' 00:11:21.639 Process raid pid: 71627 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71627 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71627 ']' 00:11:21.639 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.897 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.897 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.897 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.897 09:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.897 [2024-11-20 09:23:47.180054] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:21.897 [2024-11-20 09:23:47.180297] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.897 [2024-11-20 09:23:47.345507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.156 [2024-11-20 09:23:47.484499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.415 [2024-11-20 09:23:47.730600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.415 [2024-11-20 09:23:47.730764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 [2024-11-20 09:23:48.083903] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.720 [2024-11-20 09:23:48.084012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.720 [2024-11-20 09:23:48.084058] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.720 [2024-11-20 09:23:48.084097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.720 [2024-11-20 09:23:48.084135] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:22.720 [2024-11-20 09:23:48.084170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.720 [2024-11-20 09:23:48.084201] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.720 [2024-11-20 09:23:48.084233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.720 "name": "Existed_Raid", 00:11:22.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.720 "strip_size_kb": 64, 00:11:22.720 "state": "configuring", 00:11:22.720 "raid_level": "concat", 00:11:22.720 "superblock": false, 00:11:22.720 "num_base_bdevs": 4, 00:11:22.720 "num_base_bdevs_discovered": 0, 00:11:22.720 "num_base_bdevs_operational": 4, 00:11:22.720 "base_bdevs_list": [ 00:11:22.720 { 00:11:22.720 "name": "BaseBdev1", 00:11:22.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.720 "is_configured": false, 00:11:22.720 "data_offset": 0, 00:11:22.720 "data_size": 0 00:11:22.720 }, 00:11:22.720 { 00:11:22.720 "name": "BaseBdev2", 00:11:22.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.720 "is_configured": false, 00:11:22.720 "data_offset": 0, 00:11:22.720 "data_size": 0 00:11:22.720 }, 00:11:22.720 { 00:11:22.720 "name": "BaseBdev3", 00:11:22.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.720 "is_configured": false, 00:11:22.720 "data_offset": 0, 00:11:22.720 "data_size": 0 00:11:22.720 }, 00:11:22.720 { 00:11:22.720 "name": "BaseBdev4", 00:11:22.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.720 "is_configured": false, 00:11:22.720 "data_offset": 0, 00:11:22.720 "data_size": 0 00:11:22.720 } 00:11:22.720 ] 00:11:22.720 }' 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.720 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.289 [2024-11-20 09:23:48.507228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.289 [2024-11-20 09:23:48.507282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.289 [2024-11-20 09:23:48.519238] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.289 [2024-11-20 09:23:48.519304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.289 [2024-11-20 09:23:48.519319] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.289 [2024-11-20 09:23:48.519336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.289 [2024-11-20 09:23:48.519347] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.289 [2024-11-20 09:23:48.519361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.289 [2024-11-20 09:23:48.519371] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.289 [2024-11-20 09:23:48.519384] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.289 [2024-11-20 09:23:48.576146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.289 BaseBdev1 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.289 [ 00:11:23.289 { 00:11:23.289 "name": "BaseBdev1", 00:11:23.289 "aliases": [ 00:11:23.289 "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6" 00:11:23.289 ], 00:11:23.289 "product_name": "Malloc disk", 00:11:23.289 "block_size": 512, 00:11:23.289 "num_blocks": 65536, 00:11:23.289 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:23.289 "assigned_rate_limits": { 00:11:23.289 "rw_ios_per_sec": 0, 00:11:23.289 "rw_mbytes_per_sec": 0, 00:11:23.289 "r_mbytes_per_sec": 0, 00:11:23.289 "w_mbytes_per_sec": 0 00:11:23.289 }, 00:11:23.289 "claimed": true, 00:11:23.289 "claim_type": "exclusive_write", 00:11:23.289 "zoned": false, 00:11:23.289 "supported_io_types": { 00:11:23.289 "read": true, 00:11:23.289 "write": true, 00:11:23.289 "unmap": true, 00:11:23.289 "flush": true, 00:11:23.289 "reset": true, 00:11:23.289 "nvme_admin": false, 00:11:23.289 "nvme_io": false, 00:11:23.289 "nvme_io_md": false, 00:11:23.289 "write_zeroes": true, 00:11:23.289 "zcopy": true, 00:11:23.289 "get_zone_info": false, 00:11:23.289 "zone_management": false, 00:11:23.289 "zone_append": false, 00:11:23.289 "compare": false, 00:11:23.289 "compare_and_write": false, 00:11:23.289 "abort": true, 00:11:23.289 "seek_hole": false, 00:11:23.289 "seek_data": false, 00:11:23.289 "copy": true, 00:11:23.289 "nvme_iov_md": false 00:11:23.289 }, 00:11:23.289 "memory_domains": [ 00:11:23.289 { 00:11:23.289 "dma_device_id": "system", 00:11:23.289 "dma_device_type": 1 00:11:23.289 }, 00:11:23.289 { 00:11:23.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.289 "dma_device_type": 2 00:11:23.289 } 00:11:23.289 ], 00:11:23.289 "driver_specific": {} 00:11:23.289 } 00:11:23.289 ] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.289 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.290 "name": "Existed_Raid", 
00:11:23.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.290 "strip_size_kb": 64, 00:11:23.290 "state": "configuring", 00:11:23.290 "raid_level": "concat", 00:11:23.290 "superblock": false, 00:11:23.290 "num_base_bdevs": 4, 00:11:23.290 "num_base_bdevs_discovered": 1, 00:11:23.290 "num_base_bdevs_operational": 4, 00:11:23.290 "base_bdevs_list": [ 00:11:23.290 { 00:11:23.290 "name": "BaseBdev1", 00:11:23.290 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:23.290 "is_configured": true, 00:11:23.290 "data_offset": 0, 00:11:23.290 "data_size": 65536 00:11:23.290 }, 00:11:23.290 { 00:11:23.290 "name": "BaseBdev2", 00:11:23.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.290 "is_configured": false, 00:11:23.290 "data_offset": 0, 00:11:23.290 "data_size": 0 00:11:23.290 }, 00:11:23.290 { 00:11:23.290 "name": "BaseBdev3", 00:11:23.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.290 "is_configured": false, 00:11:23.290 "data_offset": 0, 00:11:23.290 "data_size": 0 00:11:23.290 }, 00:11:23.290 { 00:11:23.290 "name": "BaseBdev4", 00:11:23.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.290 "is_configured": false, 00:11:23.290 "data_offset": 0, 00:11:23.290 "data_size": 0 00:11:23.290 } 00:11:23.290 ] 00:11:23.290 }' 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.290 09:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.865 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.865 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.865 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.866 [2024-11-20 09:23:49.091341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.866 [2024-11-20 09:23:49.091412] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.866 [2024-11-20 09:23:49.103387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.866 [2024-11-20 09:23:49.105534] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.866 [2024-11-20 09:23:49.105583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.866 [2024-11-20 09:23:49.105595] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.866 [2024-11-20 09:23:49.105608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.866 [2024-11-20 09:23:49.105616] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.866 [2024-11-20 09:23:49.105626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.866 "name": "Existed_Raid", 00:11:23.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.866 "strip_size_kb": 64, 00:11:23.866 "state": "configuring", 00:11:23.866 "raid_level": "concat", 00:11:23.866 "superblock": false, 00:11:23.866 "num_base_bdevs": 4, 00:11:23.866 
"num_base_bdevs_discovered": 1, 00:11:23.866 "num_base_bdevs_operational": 4, 00:11:23.866 "base_bdevs_list": [ 00:11:23.866 { 00:11:23.866 "name": "BaseBdev1", 00:11:23.866 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:23.866 "is_configured": true, 00:11:23.866 "data_offset": 0, 00:11:23.866 "data_size": 65536 00:11:23.866 }, 00:11:23.866 { 00:11:23.866 "name": "BaseBdev2", 00:11:23.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.866 "is_configured": false, 00:11:23.866 "data_offset": 0, 00:11:23.866 "data_size": 0 00:11:23.866 }, 00:11:23.866 { 00:11:23.866 "name": "BaseBdev3", 00:11:23.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.866 "is_configured": false, 00:11:23.866 "data_offset": 0, 00:11:23.866 "data_size": 0 00:11:23.866 }, 00:11:23.866 { 00:11:23.866 "name": "BaseBdev4", 00:11:23.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.866 "is_configured": false, 00:11:23.866 "data_offset": 0, 00:11:23.866 "data_size": 0 00:11:23.866 } 00:11:23.866 ] 00:11:23.866 }' 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.866 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.443 [2024-11-20 09:23:49.643933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.443 BaseBdev2 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:24.443 09:23:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.443 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.443 [ 00:11:24.443 { 00:11:24.444 "name": "BaseBdev2", 00:11:24.444 "aliases": [ 00:11:24.444 "9ee8c1db-7869-4334-80c6-8761a7cb4f29" 00:11:24.444 ], 00:11:24.444 "product_name": "Malloc disk", 00:11:24.444 "block_size": 512, 00:11:24.444 "num_blocks": 65536, 00:11:24.444 "uuid": "9ee8c1db-7869-4334-80c6-8761a7cb4f29", 00:11:24.444 "assigned_rate_limits": { 00:11:24.444 "rw_ios_per_sec": 0, 00:11:24.444 "rw_mbytes_per_sec": 0, 00:11:24.444 "r_mbytes_per_sec": 0, 00:11:24.444 "w_mbytes_per_sec": 0 00:11:24.444 }, 00:11:24.444 "claimed": true, 00:11:24.444 "claim_type": "exclusive_write", 00:11:24.444 "zoned": false, 00:11:24.444 "supported_io_types": { 
00:11:24.444 "read": true, 00:11:24.444 "write": true, 00:11:24.444 "unmap": true, 00:11:24.444 "flush": true, 00:11:24.444 "reset": true, 00:11:24.444 "nvme_admin": false, 00:11:24.444 "nvme_io": false, 00:11:24.444 "nvme_io_md": false, 00:11:24.444 "write_zeroes": true, 00:11:24.444 "zcopy": true, 00:11:24.444 "get_zone_info": false, 00:11:24.444 "zone_management": false, 00:11:24.444 "zone_append": false, 00:11:24.444 "compare": false, 00:11:24.444 "compare_and_write": false, 00:11:24.444 "abort": true, 00:11:24.444 "seek_hole": false, 00:11:24.444 "seek_data": false, 00:11:24.444 "copy": true, 00:11:24.444 "nvme_iov_md": false 00:11:24.444 }, 00:11:24.444 "memory_domains": [ 00:11:24.444 { 00:11:24.444 "dma_device_id": "system", 00:11:24.444 "dma_device_type": 1 00:11:24.444 }, 00:11:24.444 { 00:11:24.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.444 "dma_device_type": 2 00:11:24.444 } 00:11:24.444 ], 00:11:24.444 "driver_specific": {} 00:11:24.444 } 00:11:24.444 ] 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.444 "name": "Existed_Raid", 00:11:24.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.444 "strip_size_kb": 64, 00:11:24.444 "state": "configuring", 00:11:24.444 "raid_level": "concat", 00:11:24.444 "superblock": false, 00:11:24.444 "num_base_bdevs": 4, 00:11:24.444 "num_base_bdevs_discovered": 2, 00:11:24.444 "num_base_bdevs_operational": 4, 00:11:24.444 "base_bdevs_list": [ 00:11:24.444 { 00:11:24.444 "name": "BaseBdev1", 00:11:24.444 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:24.444 "is_configured": true, 00:11:24.444 "data_offset": 0, 00:11:24.444 "data_size": 65536 00:11:24.444 }, 00:11:24.444 { 00:11:24.444 "name": "BaseBdev2", 00:11:24.444 "uuid": "9ee8c1db-7869-4334-80c6-8761a7cb4f29", 00:11:24.444 
"is_configured": true, 00:11:24.444 "data_offset": 0, 00:11:24.444 "data_size": 65536 00:11:24.444 }, 00:11:24.444 { 00:11:24.444 "name": "BaseBdev3", 00:11:24.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.444 "is_configured": false, 00:11:24.444 "data_offset": 0, 00:11:24.444 "data_size": 0 00:11:24.444 }, 00:11:24.444 { 00:11:24.444 "name": "BaseBdev4", 00:11:24.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.444 "is_configured": false, 00:11:24.444 "data_offset": 0, 00:11:24.444 "data_size": 0 00:11:24.444 } 00:11:24.444 ] 00:11:24.444 }' 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.444 09:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.703 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.703 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.703 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 [2024-11-20 09:23:50.210390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.964 BaseBdev3 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 [ 00:11:24.964 { 00:11:24.964 "name": "BaseBdev3", 00:11:24.964 "aliases": [ 00:11:24.964 "88918ddb-537c-4d14-9668-7a5772bbaed6" 00:11:24.964 ], 00:11:24.964 "product_name": "Malloc disk", 00:11:24.964 "block_size": 512, 00:11:24.964 "num_blocks": 65536, 00:11:24.964 "uuid": "88918ddb-537c-4d14-9668-7a5772bbaed6", 00:11:24.964 "assigned_rate_limits": { 00:11:24.964 "rw_ios_per_sec": 0, 00:11:24.964 "rw_mbytes_per_sec": 0, 00:11:24.964 "r_mbytes_per_sec": 0, 00:11:24.964 "w_mbytes_per_sec": 0 00:11:24.964 }, 00:11:24.964 "claimed": true, 00:11:24.964 "claim_type": "exclusive_write", 00:11:24.964 "zoned": false, 00:11:24.964 "supported_io_types": { 00:11:24.964 "read": true, 00:11:24.964 "write": true, 00:11:24.964 "unmap": true, 00:11:24.964 "flush": true, 00:11:24.964 "reset": true, 00:11:24.964 "nvme_admin": false, 00:11:24.964 "nvme_io": false, 00:11:24.964 "nvme_io_md": false, 00:11:24.964 "write_zeroes": true, 00:11:24.964 "zcopy": true, 00:11:24.964 "get_zone_info": false, 00:11:24.964 "zone_management": false, 00:11:24.964 "zone_append": false, 00:11:24.964 "compare": false, 00:11:24.964 "compare_and_write": false, 
00:11:24.964 "abort": true, 00:11:24.964 "seek_hole": false, 00:11:24.964 "seek_data": false, 00:11:24.964 "copy": true, 00:11:24.964 "nvme_iov_md": false 00:11:24.964 }, 00:11:24.964 "memory_domains": [ 00:11:24.964 { 00:11:24.964 "dma_device_id": "system", 00:11:24.964 "dma_device_type": 1 00:11:24.964 }, 00:11:24.964 { 00:11:24.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.964 "dma_device_type": 2 00:11:24.964 } 00:11:24.964 ], 00:11:24.964 "driver_specific": {} 00:11:24.964 } 00:11:24.964 ] 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.964 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.964 "name": "Existed_Raid", 00:11:24.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.964 "strip_size_kb": 64, 00:11:24.964 "state": "configuring", 00:11:24.964 "raid_level": "concat", 00:11:24.964 "superblock": false, 00:11:24.964 "num_base_bdevs": 4, 00:11:24.964 "num_base_bdevs_discovered": 3, 00:11:24.964 "num_base_bdevs_operational": 4, 00:11:24.964 "base_bdevs_list": [ 00:11:24.964 { 00:11:24.964 "name": "BaseBdev1", 00:11:24.964 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:24.964 "is_configured": true, 00:11:24.964 "data_offset": 0, 00:11:24.964 "data_size": 65536 00:11:24.964 }, 00:11:24.964 { 00:11:24.964 "name": "BaseBdev2", 00:11:24.964 "uuid": "9ee8c1db-7869-4334-80c6-8761a7cb4f29", 00:11:24.964 "is_configured": true, 00:11:24.965 "data_offset": 0, 00:11:24.965 "data_size": 65536 00:11:24.965 }, 00:11:24.965 { 00:11:24.965 "name": "BaseBdev3", 00:11:24.965 "uuid": "88918ddb-537c-4d14-9668-7a5772bbaed6", 00:11:24.965 "is_configured": true, 00:11:24.965 "data_offset": 0, 00:11:24.965 "data_size": 65536 00:11:24.965 }, 00:11:24.965 { 00:11:24.965 "name": "BaseBdev4", 00:11:24.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.965 "is_configured": false, 
00:11:24.965 "data_offset": 0, 00:11:24.965 "data_size": 0 00:11:24.965 } 00:11:24.965 ] 00:11:24.965 }' 00:11:24.965 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.965 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.533 [2024-11-20 09:23:50.735952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.533 [2024-11-20 09:23:50.736043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:25.533 [2024-11-20 09:23:50.736059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:25.533 [2024-11-20 09:23:50.736468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.533 [2024-11-20 09:23:50.736728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:25.533 [2024-11-20 09:23:50.736767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:25.533 [2024-11-20 09:23:50.737145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.533 BaseBdev4 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.533 [ 00:11:25.533 { 00:11:25.533 "name": "BaseBdev4", 00:11:25.533 "aliases": [ 00:11:25.533 "88c625d8-3d5c-4172-8e8c-5aa1de823ae8" 00:11:25.533 ], 00:11:25.533 "product_name": "Malloc disk", 00:11:25.533 "block_size": 512, 00:11:25.533 "num_blocks": 65536, 00:11:25.533 "uuid": "88c625d8-3d5c-4172-8e8c-5aa1de823ae8", 00:11:25.533 "assigned_rate_limits": { 00:11:25.533 "rw_ios_per_sec": 0, 00:11:25.533 "rw_mbytes_per_sec": 0, 00:11:25.533 "r_mbytes_per_sec": 0, 00:11:25.533 "w_mbytes_per_sec": 0 00:11:25.533 }, 00:11:25.533 "claimed": true, 00:11:25.533 "claim_type": "exclusive_write", 00:11:25.533 "zoned": false, 00:11:25.533 "supported_io_types": { 00:11:25.533 "read": true, 00:11:25.533 "write": true, 00:11:25.533 "unmap": true, 00:11:25.533 "flush": true, 00:11:25.533 "reset": true, 00:11:25.533 
"nvme_admin": false, 00:11:25.533 "nvme_io": false, 00:11:25.533 "nvme_io_md": false, 00:11:25.533 "write_zeroes": true, 00:11:25.533 "zcopy": true, 00:11:25.533 "get_zone_info": false, 00:11:25.533 "zone_management": false, 00:11:25.533 "zone_append": false, 00:11:25.533 "compare": false, 00:11:25.533 "compare_and_write": false, 00:11:25.533 "abort": true, 00:11:25.533 "seek_hole": false, 00:11:25.533 "seek_data": false, 00:11:25.533 "copy": true, 00:11:25.533 "nvme_iov_md": false 00:11:25.533 }, 00:11:25.533 "memory_domains": [ 00:11:25.533 { 00:11:25.533 "dma_device_id": "system", 00:11:25.533 "dma_device_type": 1 00:11:25.533 }, 00:11:25.533 { 00:11:25.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.533 "dma_device_type": 2 00:11:25.533 } 00:11:25.533 ], 00:11:25.533 "driver_specific": {} 00:11:25.533 } 00:11:25.533 ] 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.533 
09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.533 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.534 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.534 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.534 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.534 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.534 "name": "Existed_Raid", 00:11:25.534 "uuid": "9a68928b-fab7-4547-aad3-345290e13d50", 00:11:25.534 "strip_size_kb": 64, 00:11:25.534 "state": "online", 00:11:25.534 "raid_level": "concat", 00:11:25.534 "superblock": false, 00:11:25.534 "num_base_bdevs": 4, 00:11:25.534 "num_base_bdevs_discovered": 4, 00:11:25.534 "num_base_bdevs_operational": 4, 00:11:25.534 "base_bdevs_list": [ 00:11:25.534 { 00:11:25.534 "name": "BaseBdev1", 00:11:25.534 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:25.534 "is_configured": true, 00:11:25.534 "data_offset": 0, 00:11:25.534 "data_size": 65536 00:11:25.534 }, 00:11:25.534 { 00:11:25.534 "name": "BaseBdev2", 00:11:25.534 "uuid": "9ee8c1db-7869-4334-80c6-8761a7cb4f29", 00:11:25.534 "is_configured": true, 00:11:25.534 "data_offset": 0, 00:11:25.534 "data_size": 65536 00:11:25.534 }, 00:11:25.534 { 00:11:25.534 "name": "BaseBdev3", 
00:11:25.534 "uuid": "88918ddb-537c-4d14-9668-7a5772bbaed6", 00:11:25.534 "is_configured": true, 00:11:25.534 "data_offset": 0, 00:11:25.534 "data_size": 65536 00:11:25.534 }, 00:11:25.534 { 00:11:25.534 "name": "BaseBdev4", 00:11:25.534 "uuid": "88c625d8-3d5c-4172-8e8c-5aa1de823ae8", 00:11:25.534 "is_configured": true, 00:11:25.534 "data_offset": 0, 00:11:25.534 "data_size": 65536 00:11:25.534 } 00:11:25.534 ] 00:11:25.534 }' 00:11:25.534 09:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.534 09:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.794 [2024-11-20 09:23:51.235638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.053 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.053 
09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.053 "name": "Existed_Raid", 00:11:26.053 "aliases": [ 00:11:26.053 "9a68928b-fab7-4547-aad3-345290e13d50" 00:11:26.053 ], 00:11:26.053 "product_name": "Raid Volume", 00:11:26.053 "block_size": 512, 00:11:26.053 "num_blocks": 262144, 00:11:26.053 "uuid": "9a68928b-fab7-4547-aad3-345290e13d50", 00:11:26.053 "assigned_rate_limits": { 00:11:26.053 "rw_ios_per_sec": 0, 00:11:26.053 "rw_mbytes_per_sec": 0, 00:11:26.053 "r_mbytes_per_sec": 0, 00:11:26.053 "w_mbytes_per_sec": 0 00:11:26.053 }, 00:11:26.053 "claimed": false, 00:11:26.053 "zoned": false, 00:11:26.053 "supported_io_types": { 00:11:26.053 "read": true, 00:11:26.053 "write": true, 00:11:26.053 "unmap": true, 00:11:26.053 "flush": true, 00:11:26.053 "reset": true, 00:11:26.053 "nvme_admin": false, 00:11:26.053 "nvme_io": false, 00:11:26.053 "nvme_io_md": false, 00:11:26.053 "write_zeroes": true, 00:11:26.053 "zcopy": false, 00:11:26.053 "get_zone_info": false, 00:11:26.053 "zone_management": false, 00:11:26.053 "zone_append": false, 00:11:26.053 "compare": false, 00:11:26.053 "compare_and_write": false, 00:11:26.053 "abort": false, 00:11:26.053 "seek_hole": false, 00:11:26.053 "seek_data": false, 00:11:26.053 "copy": false, 00:11:26.053 "nvme_iov_md": false 00:11:26.053 }, 00:11:26.053 "memory_domains": [ 00:11:26.053 { 00:11:26.053 "dma_device_id": "system", 00:11:26.053 "dma_device_type": 1 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.053 "dma_device_type": 2 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": "system", 00:11:26.053 "dma_device_type": 1 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.053 "dma_device_type": 2 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": "system", 00:11:26.053 "dma_device_type": 1 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:26.053 "dma_device_type": 2 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": "system", 00:11:26.053 "dma_device_type": 1 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.053 "dma_device_type": 2 00:11:26.053 } 00:11:26.053 ], 00:11:26.053 "driver_specific": { 00:11:26.053 "raid": { 00:11:26.053 "uuid": "9a68928b-fab7-4547-aad3-345290e13d50", 00:11:26.053 "strip_size_kb": 64, 00:11:26.053 "state": "online", 00:11:26.053 "raid_level": "concat", 00:11:26.053 "superblock": false, 00:11:26.053 "num_base_bdevs": 4, 00:11:26.053 "num_base_bdevs_discovered": 4, 00:11:26.053 "num_base_bdevs_operational": 4, 00:11:26.053 "base_bdevs_list": [ 00:11:26.053 { 00:11:26.053 "name": "BaseBdev1", 00:11:26.053 "uuid": "b50c2b0d-d4d8-4a89-95bd-ea03b55ac6b6", 00:11:26.053 "is_configured": true, 00:11:26.053 "data_offset": 0, 00:11:26.053 "data_size": 65536 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "name": "BaseBdev2", 00:11:26.053 "uuid": "9ee8c1db-7869-4334-80c6-8761a7cb4f29", 00:11:26.053 "is_configured": true, 00:11:26.053 "data_offset": 0, 00:11:26.053 "data_size": 65536 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "name": "BaseBdev3", 00:11:26.053 "uuid": "88918ddb-537c-4d14-9668-7a5772bbaed6", 00:11:26.053 "is_configured": true, 00:11:26.053 "data_offset": 0, 00:11:26.053 "data_size": 65536 00:11:26.053 }, 00:11:26.053 { 00:11:26.053 "name": "BaseBdev4", 00:11:26.053 "uuid": "88c625d8-3d5c-4172-8e8c-5aa1de823ae8", 00:11:26.053 "is_configured": true, 00:11:26.053 "data_offset": 0, 00:11:26.053 "data_size": 65536 00:11:26.053 } 00:11:26.053 ] 00:11:26.054 } 00:11:26.054 } 00:11:26.054 }' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.054 BaseBdev2 
00:11:26.054 BaseBdev3 00:11:26.054 BaseBdev4' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.054 09:23:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.054 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.313 09:23:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.313 [2024-11-20 09:23:51.546767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.313 [2024-11-20 09:23:51.546810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.313 [2024-11-20 09:23:51.546872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.313 "name": "Existed_Raid", 00:11:26.313 "uuid": "9a68928b-fab7-4547-aad3-345290e13d50", 00:11:26.313 "strip_size_kb": 64, 00:11:26.313 "state": "offline", 00:11:26.313 "raid_level": "concat", 00:11:26.313 "superblock": false, 00:11:26.313 "num_base_bdevs": 4, 00:11:26.313 "num_base_bdevs_discovered": 3, 00:11:26.313 "num_base_bdevs_operational": 3, 00:11:26.313 "base_bdevs_list": [ 00:11:26.313 { 00:11:26.313 "name": null, 00:11:26.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.313 "is_configured": false, 00:11:26.313 "data_offset": 0, 00:11:26.313 "data_size": 65536 00:11:26.313 }, 00:11:26.313 { 00:11:26.313 "name": "BaseBdev2", 00:11:26.313 "uuid": "9ee8c1db-7869-4334-80c6-8761a7cb4f29", 00:11:26.313 "is_configured": 
true, 00:11:26.313 "data_offset": 0, 00:11:26.313 "data_size": 65536 00:11:26.313 }, 00:11:26.313 { 00:11:26.313 "name": "BaseBdev3", 00:11:26.313 "uuid": "88918ddb-537c-4d14-9668-7a5772bbaed6", 00:11:26.313 "is_configured": true, 00:11:26.313 "data_offset": 0, 00:11:26.313 "data_size": 65536 00:11:26.313 }, 00:11:26.313 { 00:11:26.313 "name": "BaseBdev4", 00:11:26.313 "uuid": "88c625d8-3d5c-4172-8e8c-5aa1de823ae8", 00:11:26.313 "is_configured": true, 00:11:26.313 "data_offset": 0, 00:11:26.313 "data_size": 65536 00:11:26.313 } 00:11:26.313 ] 00:11:26.313 }' 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.313 09:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.880 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.881 [2024-11-20 09:23:52.189847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.881 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.138 [2024-11-20 09:23:52.361004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.138 09:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.138 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.139 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:27.139 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.139 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.139 [2024-11-20 09:23:52.527474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:27.139 [2024-11-20 09:23:52.527532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.396 BaseBdev2 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.396 [ 00:11:27.396 { 00:11:27.396 "name": "BaseBdev2", 00:11:27.396 "aliases": [ 00:11:27.396 "81760aa0-9019-4d4a-bf66-aab2b5eba92d" 00:11:27.396 ], 00:11:27.396 "product_name": "Malloc disk", 00:11:27.396 "block_size": 512, 00:11:27.396 "num_blocks": 65536, 00:11:27.396 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:27.396 "assigned_rate_limits": { 00:11:27.396 "rw_ios_per_sec": 0, 00:11:27.396 "rw_mbytes_per_sec": 0, 00:11:27.396 "r_mbytes_per_sec": 0, 00:11:27.396 "w_mbytes_per_sec": 0 00:11:27.396 }, 00:11:27.396 "claimed": false, 00:11:27.396 "zoned": false, 00:11:27.396 "supported_io_types": { 00:11:27.396 "read": true, 00:11:27.396 "write": true, 00:11:27.396 "unmap": true, 00:11:27.396 "flush": true, 00:11:27.396 "reset": true, 00:11:27.396 "nvme_admin": false, 00:11:27.396 "nvme_io": false, 00:11:27.396 "nvme_io_md": false, 00:11:27.396 "write_zeroes": true, 00:11:27.396 "zcopy": true, 00:11:27.396 "get_zone_info": false, 00:11:27.396 "zone_management": false, 00:11:27.396 "zone_append": false, 00:11:27.396 "compare": false, 00:11:27.396 "compare_and_write": false, 00:11:27.396 "abort": true, 00:11:27.396 "seek_hole": false, 00:11:27.396 "seek_data": false, 
00:11:27.396 "copy": true, 00:11:27.396 "nvme_iov_md": false 00:11:27.396 }, 00:11:27.396 "memory_domains": [ 00:11:27.396 { 00:11:27.396 "dma_device_id": "system", 00:11:27.396 "dma_device_type": 1 00:11:27.396 }, 00:11:27.396 { 00:11:27.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.396 "dma_device_type": 2 00:11:27.396 } 00:11:27.396 ], 00:11:27.396 "driver_specific": {} 00:11:27.396 } 00:11:27.396 ] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.396 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.396 BaseBdev3 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.397 
09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.397 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.397 [ 00:11:27.397 { 00:11:27.397 "name": "BaseBdev3", 00:11:27.397 "aliases": [ 00:11:27.397 "de654ff4-c7e3-46ea-b51b-bd83180cf22a" 00:11:27.397 ], 00:11:27.397 "product_name": "Malloc disk", 00:11:27.397 "block_size": 512, 00:11:27.397 "num_blocks": 65536, 00:11:27.397 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:27.655 "assigned_rate_limits": { 00:11:27.655 "rw_ios_per_sec": 0, 00:11:27.655 "rw_mbytes_per_sec": 0, 00:11:27.655 "r_mbytes_per_sec": 0, 00:11:27.655 "w_mbytes_per_sec": 0 00:11:27.655 }, 00:11:27.655 "claimed": false, 00:11:27.655 "zoned": false, 00:11:27.655 "supported_io_types": { 00:11:27.655 "read": true, 00:11:27.655 "write": true, 00:11:27.655 "unmap": true, 00:11:27.655 "flush": true, 00:11:27.655 "reset": true, 00:11:27.655 "nvme_admin": false, 00:11:27.655 "nvme_io": false, 00:11:27.655 "nvme_io_md": false, 00:11:27.655 "write_zeroes": true, 00:11:27.655 "zcopy": true, 00:11:27.655 "get_zone_info": false, 00:11:27.655 "zone_management": false, 00:11:27.655 "zone_append": false, 00:11:27.655 "compare": false, 00:11:27.655 "compare_and_write": false, 00:11:27.655 "abort": true, 00:11:27.655 "seek_hole": false, 00:11:27.655 "seek_data": false, 00:11:27.655 
"copy": true, 00:11:27.655 "nvme_iov_md": false 00:11:27.655 }, 00:11:27.655 "memory_domains": [ 00:11:27.655 { 00:11:27.655 "dma_device_id": "system", 00:11:27.655 "dma_device_type": 1 00:11:27.655 }, 00:11:27.655 { 00:11:27.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.655 "dma_device_type": 2 00:11:27.655 } 00:11:27.655 ], 00:11:27.655 "driver_specific": {} 00:11:27.655 } 00:11:27.655 ] 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.655 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.656 BaseBdev4 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.656 09:23:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.656 [ 00:11:27.656 { 00:11:27.656 "name": "BaseBdev4", 00:11:27.656 "aliases": [ 00:11:27.656 "4701a31d-ffe6-4fcf-9ed8-39845da59d9f" 00:11:27.656 ], 00:11:27.656 "product_name": "Malloc disk", 00:11:27.656 "block_size": 512, 00:11:27.656 "num_blocks": 65536, 00:11:27.656 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:27.656 "assigned_rate_limits": { 00:11:27.656 "rw_ios_per_sec": 0, 00:11:27.656 "rw_mbytes_per_sec": 0, 00:11:27.656 "r_mbytes_per_sec": 0, 00:11:27.656 "w_mbytes_per_sec": 0 00:11:27.656 }, 00:11:27.656 "claimed": false, 00:11:27.656 "zoned": false, 00:11:27.656 "supported_io_types": { 00:11:27.656 "read": true, 00:11:27.656 "write": true, 00:11:27.656 "unmap": true, 00:11:27.656 "flush": true, 00:11:27.656 "reset": true, 00:11:27.656 "nvme_admin": false, 00:11:27.656 "nvme_io": false, 00:11:27.656 "nvme_io_md": false, 00:11:27.656 "write_zeroes": true, 00:11:27.656 "zcopy": true, 00:11:27.656 "get_zone_info": false, 00:11:27.656 "zone_management": false, 00:11:27.656 "zone_append": false, 00:11:27.656 "compare": false, 00:11:27.656 "compare_and_write": false, 00:11:27.656 "abort": true, 00:11:27.656 "seek_hole": false, 00:11:27.656 "seek_data": false, 00:11:27.656 "copy": true, 
00:11:27.656 "nvme_iov_md": false 00:11:27.656 }, 00:11:27.656 "memory_domains": [ 00:11:27.656 { 00:11:27.656 "dma_device_id": "system", 00:11:27.656 "dma_device_type": 1 00:11:27.656 }, 00:11:27.656 { 00:11:27.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.656 "dma_device_type": 2 00:11:27.656 } 00:11:27.656 ], 00:11:27.656 "driver_specific": {} 00:11:27.656 } 00:11:27.656 ] 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.656 [2024-11-20 09:23:52.945066] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.656 [2024-11-20 09:23:52.945122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.656 [2024-11-20 09:23:52.945151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.656 [2024-11-20 09:23:52.947189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.656 [2024-11-20 09:23:52.947253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.656 09:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.656 09:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.656 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.656 "name": "Existed_Raid", 00:11:27.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.656 "strip_size_kb": 64, 00:11:27.656 "state": "configuring", 00:11:27.656 
"raid_level": "concat", 00:11:27.656 "superblock": false, 00:11:27.656 "num_base_bdevs": 4, 00:11:27.656 "num_base_bdevs_discovered": 3, 00:11:27.656 "num_base_bdevs_operational": 4, 00:11:27.656 "base_bdevs_list": [ 00:11:27.656 { 00:11:27.656 "name": "BaseBdev1", 00:11:27.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.656 "is_configured": false, 00:11:27.656 "data_offset": 0, 00:11:27.656 "data_size": 0 00:11:27.656 }, 00:11:27.656 { 00:11:27.656 "name": "BaseBdev2", 00:11:27.656 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:27.656 "is_configured": true, 00:11:27.656 "data_offset": 0, 00:11:27.656 "data_size": 65536 00:11:27.656 }, 00:11:27.656 { 00:11:27.656 "name": "BaseBdev3", 00:11:27.656 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:27.656 "is_configured": true, 00:11:27.656 "data_offset": 0, 00:11:27.656 "data_size": 65536 00:11:27.656 }, 00:11:27.656 { 00:11:27.656 "name": "BaseBdev4", 00:11:27.656 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:27.656 "is_configured": true, 00:11:27.656 "data_offset": 0, 00:11:27.656 "data_size": 65536 00:11:27.656 } 00:11:27.656 ] 00:11:27.656 }' 00:11:27.656 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.656 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.223 [2024-11-20 09:23:53.432279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.223 "name": "Existed_Raid", 00:11:28.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.223 "strip_size_kb": 64, 00:11:28.223 "state": "configuring", 00:11:28.223 "raid_level": "concat", 00:11:28.223 "superblock": false, 
00:11:28.223 "num_base_bdevs": 4, 00:11:28.223 "num_base_bdevs_discovered": 2, 00:11:28.223 "num_base_bdevs_operational": 4, 00:11:28.223 "base_bdevs_list": [ 00:11:28.223 { 00:11:28.223 "name": "BaseBdev1", 00:11:28.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.223 "is_configured": false, 00:11:28.223 "data_offset": 0, 00:11:28.223 "data_size": 0 00:11:28.223 }, 00:11:28.223 { 00:11:28.223 "name": null, 00:11:28.223 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:28.223 "is_configured": false, 00:11:28.223 "data_offset": 0, 00:11:28.223 "data_size": 65536 00:11:28.223 }, 00:11:28.223 { 00:11:28.223 "name": "BaseBdev3", 00:11:28.223 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:28.223 "is_configured": true, 00:11:28.223 "data_offset": 0, 00:11:28.223 "data_size": 65536 00:11:28.223 }, 00:11:28.223 { 00:11:28.223 "name": "BaseBdev4", 00:11:28.223 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:28.223 "is_configured": true, 00:11:28.223 "data_offset": 0, 00:11:28.223 "data_size": 65536 00:11:28.223 } 00:11:28.223 ] 00:11:28.223 }' 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.223 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:28.482 09:23:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.482 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.768 [2024-11-20 09:23:53.957922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.768 BaseBdev1 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.768 [ 00:11:28.768 { 00:11:28.768 "name": "BaseBdev1", 00:11:28.768 "aliases": [ 00:11:28.768 "9c7fee50-0134-477a-94bc-df05ff4e8450" 00:11:28.768 ], 00:11:28.768 "product_name": "Malloc disk", 00:11:28.768 "block_size": 512, 00:11:28.768 "num_blocks": 65536, 00:11:28.768 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:28.768 "assigned_rate_limits": { 00:11:28.768 "rw_ios_per_sec": 0, 00:11:28.768 "rw_mbytes_per_sec": 0, 00:11:28.768 "r_mbytes_per_sec": 0, 00:11:28.768 "w_mbytes_per_sec": 0 00:11:28.768 }, 00:11:28.768 "claimed": true, 00:11:28.768 "claim_type": "exclusive_write", 00:11:28.768 "zoned": false, 00:11:28.768 "supported_io_types": { 00:11:28.768 "read": true, 00:11:28.768 "write": true, 00:11:28.768 "unmap": true, 00:11:28.768 "flush": true, 00:11:28.768 "reset": true, 00:11:28.768 "nvme_admin": false, 00:11:28.768 "nvme_io": false, 00:11:28.768 "nvme_io_md": false, 00:11:28.768 "write_zeroes": true, 00:11:28.768 "zcopy": true, 00:11:28.768 "get_zone_info": false, 00:11:28.768 "zone_management": false, 00:11:28.768 "zone_append": false, 00:11:28.768 "compare": false, 00:11:28.768 "compare_and_write": false, 00:11:28.768 "abort": true, 00:11:28.768 "seek_hole": false, 00:11:28.768 "seek_data": false, 00:11:28.768 "copy": true, 00:11:28.768 "nvme_iov_md": false 00:11:28.768 }, 00:11:28.768 "memory_domains": [ 00:11:28.768 { 00:11:28.768 "dma_device_id": "system", 00:11:28.768 "dma_device_type": 1 00:11:28.768 }, 00:11:28.768 { 00:11:28.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.768 "dma_device_type": 2 00:11:28.768 } 00:11:28.768 ], 00:11:28.768 "driver_specific": {} 00:11:28.768 } 00:11:28.768 ] 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.768 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.769 09:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.769 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.769 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.769 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.769 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.769 "name": "Existed_Raid", 00:11:28.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.769 "strip_size_kb": 64, 00:11:28.769 "state": "configuring", 00:11:28.769 "raid_level": "concat", 00:11:28.769 "superblock": false, 
00:11:28.769 "num_base_bdevs": 4, 00:11:28.769 "num_base_bdevs_discovered": 3, 00:11:28.769 "num_base_bdevs_operational": 4, 00:11:28.769 "base_bdevs_list": [ 00:11:28.769 { 00:11:28.769 "name": "BaseBdev1", 00:11:28.769 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:28.769 "is_configured": true, 00:11:28.769 "data_offset": 0, 00:11:28.769 "data_size": 65536 00:11:28.769 }, 00:11:28.769 { 00:11:28.769 "name": null, 00:11:28.769 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:28.769 "is_configured": false, 00:11:28.769 "data_offset": 0, 00:11:28.769 "data_size": 65536 00:11:28.769 }, 00:11:28.769 { 00:11:28.769 "name": "BaseBdev3", 00:11:28.769 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:28.769 "is_configured": true, 00:11:28.769 "data_offset": 0, 00:11:28.769 "data_size": 65536 00:11:28.769 }, 00:11:28.769 { 00:11:28.769 "name": "BaseBdev4", 00:11:28.769 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:28.769 "is_configured": true, 00:11:28.769 "data_offset": 0, 00:11:28.769 "data_size": 65536 00:11:28.769 } 00:11:28.769 ] 00:11:28.769 }' 00:11:28.769 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.769 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.027 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.027 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.027 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.027 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.027 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:29.286 09:23:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 [2024-11-20 09:23:54.497122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.286 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.287 "name": "Existed_Raid", 00:11:29.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.287 "strip_size_kb": 64, 00:11:29.287 "state": "configuring", 00:11:29.287 "raid_level": "concat", 00:11:29.287 "superblock": false, 00:11:29.287 "num_base_bdevs": 4, 00:11:29.287 "num_base_bdevs_discovered": 2, 00:11:29.287 "num_base_bdevs_operational": 4, 00:11:29.287 "base_bdevs_list": [ 00:11:29.287 { 00:11:29.287 "name": "BaseBdev1", 00:11:29.287 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:29.287 "is_configured": true, 00:11:29.287 "data_offset": 0, 00:11:29.287 "data_size": 65536 00:11:29.287 }, 00:11:29.287 { 00:11:29.287 "name": null, 00:11:29.287 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:29.287 "is_configured": false, 00:11:29.287 "data_offset": 0, 00:11:29.287 "data_size": 65536 00:11:29.287 }, 00:11:29.287 { 00:11:29.287 "name": null, 00:11:29.287 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:29.287 "is_configured": false, 00:11:29.287 "data_offset": 0, 00:11:29.287 "data_size": 65536 00:11:29.287 }, 00:11:29.287 { 00:11:29.287 "name": "BaseBdev4", 00:11:29.287 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:29.287 "is_configured": true, 00:11:29.287 "data_offset": 0, 00:11:29.287 "data_size": 65536 00:11:29.287 } 00:11:29.287 ] 00:11:29.287 }' 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.287 09:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.855 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:29.855 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.855 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.855 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.856 [2024-11-20 09:23:55.044205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.856 "name": "Existed_Raid", 00:11:29.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.856 "strip_size_kb": 64, 00:11:29.856 "state": "configuring", 00:11:29.856 "raid_level": "concat", 00:11:29.856 "superblock": false, 00:11:29.856 "num_base_bdevs": 4, 00:11:29.856 "num_base_bdevs_discovered": 3, 00:11:29.856 "num_base_bdevs_operational": 4, 00:11:29.856 "base_bdevs_list": [ 00:11:29.856 { 00:11:29.856 "name": "BaseBdev1", 00:11:29.856 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:29.856 "is_configured": true, 00:11:29.856 "data_offset": 0, 00:11:29.856 "data_size": 65536 00:11:29.856 }, 00:11:29.856 { 00:11:29.856 "name": null, 00:11:29.856 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:29.856 "is_configured": false, 00:11:29.856 "data_offset": 0, 00:11:29.856 "data_size": 65536 00:11:29.856 }, 00:11:29.856 { 00:11:29.856 "name": "BaseBdev3", 00:11:29.856 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:29.856 "is_configured": 
true, 00:11:29.856 "data_offset": 0, 00:11:29.856 "data_size": 65536 00:11:29.856 }, 00:11:29.856 { 00:11:29.856 "name": "BaseBdev4", 00:11:29.856 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:29.856 "is_configured": true, 00:11:29.856 "data_offset": 0, 00:11:29.856 "data_size": 65536 00:11:29.856 } 00:11:29.856 ] 00:11:29.856 }' 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.856 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.115 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 [2024-11-20 09:23:55.567399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.374 "name": "Existed_Raid", 00:11:30.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.374 "strip_size_kb": 64, 00:11:30.374 "state": "configuring", 00:11:30.374 "raid_level": "concat", 00:11:30.374 "superblock": false, 00:11:30.374 "num_base_bdevs": 4, 00:11:30.374 "num_base_bdevs_discovered": 2, 00:11:30.374 "num_base_bdevs_operational": 4, 00:11:30.374 
"base_bdevs_list": [ 00:11:30.374 { 00:11:30.374 "name": null, 00:11:30.374 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:30.374 "is_configured": false, 00:11:30.374 "data_offset": 0, 00:11:30.374 "data_size": 65536 00:11:30.374 }, 00:11:30.374 { 00:11:30.374 "name": null, 00:11:30.374 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:30.374 "is_configured": false, 00:11:30.374 "data_offset": 0, 00:11:30.374 "data_size": 65536 00:11:30.374 }, 00:11:30.374 { 00:11:30.374 "name": "BaseBdev3", 00:11:30.374 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:30.374 "is_configured": true, 00:11:30.374 "data_offset": 0, 00:11:30.374 "data_size": 65536 00:11:30.374 }, 00:11:30.374 { 00:11:30.374 "name": "BaseBdev4", 00:11:30.374 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:30.374 "is_configured": true, 00:11:30.374 "data_offset": 0, 00:11:30.374 "data_size": 65536 00:11:30.374 } 00:11:30.374 ] 00:11:30.374 }' 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.374 09:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:30.943 09:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.943 [2024-11-20 09:23:56.176463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.943 "name": "Existed_Raid", 00:11:30.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.943 "strip_size_kb": 64, 00:11:30.943 "state": "configuring", 00:11:30.943 "raid_level": "concat", 00:11:30.943 "superblock": false, 00:11:30.943 "num_base_bdevs": 4, 00:11:30.943 "num_base_bdevs_discovered": 3, 00:11:30.943 "num_base_bdevs_operational": 4, 00:11:30.943 "base_bdevs_list": [ 00:11:30.943 { 00:11:30.943 "name": null, 00:11:30.943 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:30.943 "is_configured": false, 00:11:30.943 "data_offset": 0, 00:11:30.943 "data_size": 65536 00:11:30.943 }, 00:11:30.943 { 00:11:30.943 "name": "BaseBdev2", 00:11:30.943 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:30.943 "is_configured": true, 00:11:30.943 "data_offset": 0, 00:11:30.943 "data_size": 65536 00:11:30.943 }, 00:11:30.943 { 00:11:30.943 "name": "BaseBdev3", 00:11:30.943 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:30.943 "is_configured": true, 00:11:30.943 "data_offset": 0, 00:11:30.943 "data_size": 65536 00:11:30.943 }, 00:11:30.943 { 00:11:30.943 "name": "BaseBdev4", 00:11:30.943 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:30.943 "is_configured": true, 00:11:30.943 "data_offset": 0, 00:11:30.943 "data_size": 65536 00:11:30.943 } 00:11:30.943 ] 00:11:30.943 }' 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.943 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.203 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:31.203 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.203 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.203 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9c7fee50-0134-477a-94bc-df05ff4e8450 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 [2024-11-20 09:23:56.748135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:31.463 [2024-11-20 09:23:56.748200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.463 [2024-11-20 09:23:56.748210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:31.463 [2024-11-20 09:23:56.748547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:31.463 [2024-11-20 09:23:56.748747] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.463 [2024-11-20 09:23:56.748773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:31.463 [2024-11-20 09:23:56.749116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.463 NewBaseBdev 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 [ 00:11:31.463 { 
00:11:31.463 "name": "NewBaseBdev", 00:11:31.463 "aliases": [ 00:11:31.463 "9c7fee50-0134-477a-94bc-df05ff4e8450" 00:11:31.463 ], 00:11:31.463 "product_name": "Malloc disk", 00:11:31.463 "block_size": 512, 00:11:31.463 "num_blocks": 65536, 00:11:31.463 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:31.463 "assigned_rate_limits": { 00:11:31.463 "rw_ios_per_sec": 0, 00:11:31.463 "rw_mbytes_per_sec": 0, 00:11:31.463 "r_mbytes_per_sec": 0, 00:11:31.463 "w_mbytes_per_sec": 0 00:11:31.463 }, 00:11:31.463 "claimed": true, 00:11:31.463 "claim_type": "exclusive_write", 00:11:31.463 "zoned": false, 00:11:31.463 "supported_io_types": { 00:11:31.463 "read": true, 00:11:31.463 "write": true, 00:11:31.463 "unmap": true, 00:11:31.463 "flush": true, 00:11:31.463 "reset": true, 00:11:31.463 "nvme_admin": false, 00:11:31.463 "nvme_io": false, 00:11:31.463 "nvme_io_md": false, 00:11:31.463 "write_zeroes": true, 00:11:31.463 "zcopy": true, 00:11:31.463 "get_zone_info": false, 00:11:31.463 "zone_management": false, 00:11:31.463 "zone_append": false, 00:11:31.463 "compare": false, 00:11:31.463 "compare_and_write": false, 00:11:31.463 "abort": true, 00:11:31.463 "seek_hole": false, 00:11:31.463 "seek_data": false, 00:11:31.463 "copy": true, 00:11:31.463 "nvme_iov_md": false 00:11:31.463 }, 00:11:31.463 "memory_domains": [ 00:11:31.463 { 00:11:31.463 "dma_device_id": "system", 00:11:31.463 "dma_device_type": 1 00:11:31.463 }, 00:11:31.463 { 00:11:31.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.463 "dma_device_type": 2 00:11:31.463 } 00:11:31.463 ], 00:11:31.463 "driver_specific": {} 00:11:31.463 } 00:11:31.463 ] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:31.463 
09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.463 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.463 "name": "Existed_Raid", 00:11:31.463 "uuid": "850cb99e-f977-4cd7-8e70-a76a286139a7", 00:11:31.463 "strip_size_kb": 64, 00:11:31.463 "state": "online", 00:11:31.463 "raid_level": "concat", 00:11:31.463 "superblock": false, 00:11:31.463 "num_base_bdevs": 4, 00:11:31.463 "num_base_bdevs_discovered": 4, 00:11:31.463 
"num_base_bdevs_operational": 4, 00:11:31.463 "base_bdevs_list": [ 00:11:31.463 { 00:11:31.463 "name": "NewBaseBdev", 00:11:31.463 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:31.463 "is_configured": true, 00:11:31.463 "data_offset": 0, 00:11:31.463 "data_size": 65536 00:11:31.463 }, 00:11:31.463 { 00:11:31.463 "name": "BaseBdev2", 00:11:31.463 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:31.463 "is_configured": true, 00:11:31.463 "data_offset": 0, 00:11:31.463 "data_size": 65536 00:11:31.463 }, 00:11:31.463 { 00:11:31.463 "name": "BaseBdev3", 00:11:31.463 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:31.463 "is_configured": true, 00:11:31.464 "data_offset": 0, 00:11:31.464 "data_size": 65536 00:11:31.464 }, 00:11:31.464 { 00:11:31.464 "name": "BaseBdev4", 00:11:31.464 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:31.464 "is_configured": true, 00:11:31.464 "data_offset": 0, 00:11:31.464 "data_size": 65536 00:11:31.464 } 00:11:31.464 ] 00:11:31.464 }' 00:11:31.464 09:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.464 09:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 [2024-11-20 09:23:57.235920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.031 "name": "Existed_Raid", 00:11:32.031 "aliases": [ 00:11:32.031 "850cb99e-f977-4cd7-8e70-a76a286139a7" 00:11:32.031 ], 00:11:32.031 "product_name": "Raid Volume", 00:11:32.031 "block_size": 512, 00:11:32.031 "num_blocks": 262144, 00:11:32.031 "uuid": "850cb99e-f977-4cd7-8e70-a76a286139a7", 00:11:32.031 "assigned_rate_limits": { 00:11:32.031 "rw_ios_per_sec": 0, 00:11:32.031 "rw_mbytes_per_sec": 0, 00:11:32.031 "r_mbytes_per_sec": 0, 00:11:32.031 "w_mbytes_per_sec": 0 00:11:32.031 }, 00:11:32.031 "claimed": false, 00:11:32.031 "zoned": false, 00:11:32.031 "supported_io_types": { 00:11:32.031 "read": true, 00:11:32.031 "write": true, 00:11:32.031 "unmap": true, 00:11:32.031 "flush": true, 00:11:32.031 "reset": true, 00:11:32.031 "nvme_admin": false, 00:11:32.031 "nvme_io": false, 00:11:32.031 "nvme_io_md": false, 00:11:32.031 "write_zeroes": true, 00:11:32.031 "zcopy": false, 00:11:32.031 "get_zone_info": false, 00:11:32.031 "zone_management": false, 00:11:32.031 "zone_append": false, 00:11:32.031 "compare": false, 00:11:32.031 "compare_and_write": false, 00:11:32.031 "abort": false, 00:11:32.031 "seek_hole": false, 00:11:32.031 "seek_data": false, 00:11:32.031 "copy": false, 00:11:32.031 "nvme_iov_md": false 00:11:32.031 }, 00:11:32.031 "memory_domains": [ 00:11:32.031 { 00:11:32.031 "dma_device_id": "system", 
00:11:32.031 "dma_device_type": 1 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.031 "dma_device_type": 2 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "system", 00:11:32.031 "dma_device_type": 1 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.031 "dma_device_type": 2 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "system", 00:11:32.031 "dma_device_type": 1 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.031 "dma_device_type": 2 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "system", 00:11:32.031 "dma_device_type": 1 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.031 "dma_device_type": 2 00:11:32.031 } 00:11:32.031 ], 00:11:32.031 "driver_specific": { 00:11:32.031 "raid": { 00:11:32.031 "uuid": "850cb99e-f977-4cd7-8e70-a76a286139a7", 00:11:32.031 "strip_size_kb": 64, 00:11:32.031 "state": "online", 00:11:32.031 "raid_level": "concat", 00:11:32.031 "superblock": false, 00:11:32.031 "num_base_bdevs": 4, 00:11:32.031 "num_base_bdevs_discovered": 4, 00:11:32.031 "num_base_bdevs_operational": 4, 00:11:32.031 "base_bdevs_list": [ 00:11:32.031 { 00:11:32.031 "name": "NewBaseBdev", 00:11:32.031 "uuid": "9c7fee50-0134-477a-94bc-df05ff4e8450", 00:11:32.031 "is_configured": true, 00:11:32.031 "data_offset": 0, 00:11:32.031 "data_size": 65536 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "name": "BaseBdev2", 00:11:32.031 "uuid": "81760aa0-9019-4d4a-bf66-aab2b5eba92d", 00:11:32.031 "is_configured": true, 00:11:32.031 "data_offset": 0, 00:11:32.031 "data_size": 65536 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "name": "BaseBdev3", 00:11:32.031 "uuid": "de654ff4-c7e3-46ea-b51b-bd83180cf22a", 00:11:32.031 "is_configured": true, 00:11:32.031 "data_offset": 0, 00:11:32.031 "data_size": 65536 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "name": "BaseBdev4", 
00:11:32.031 "uuid": "4701a31d-ffe6-4fcf-9ed8-39845da59d9f", 00:11:32.031 "is_configured": true, 00:11:32.031 "data_offset": 0, 00:11:32.031 "data_size": 65536 00:11:32.031 } 00:11:32.031 ] 00:11:32.031 } 00:11:32.031 } 00:11:32.031 }' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.031 BaseBdev2 00:11:32.031 BaseBdev3 00:11:32.031 BaseBdev4' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.031 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.032 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:32.291 09:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.291 [2024-11-20 09:23:57.590870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.291 [2024-11-20 09:23:57.590912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.291 [2024-11-20 09:23:57.591014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.291 [2024-11-20 09:23:57.591092] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.291 [2024-11-20 09:23:57.591104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71627 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71627 ']' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71627 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71627 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.291 killing process with pid 71627 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71627' 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71627 00:11:32.291 [2024-11-20 09:23:57.646581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.291 09:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71627 00:11:32.874 [2024-11-20 09:23:58.114449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.249 00:11:34.249 real 0m12.355s 00:11:34.249 user 0m19.551s 00:11:34.249 sys 0m2.007s 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.249 ************************************ 00:11:34.249 END TEST raid_state_function_test 00:11:34.249 ************************************ 00:11:34.249 09:23:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:34.249 09:23:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:34.249 09:23:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.249 09:23:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.249 ************************************ 00:11:34.249 START TEST raid_state_function_test_sb 00:11:34.249 ************************************ 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:34.249 09:23:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.249 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72304 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.250 Process raid pid: 72304 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72304' 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72304 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.250 09:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.250 [2024-11-20 09:23:59.598291] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:34.250 [2024-11-20 09:23:59.598985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.508 [2024-11-20 09:23:59.780968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.508 [2024-11-20 09:23:59.924271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.765 [2024-11-20 09:24:00.173122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.765 [2024-11-20 09:24:00.173176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.330 [2024-11-20 09:24:00.516846] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.330 [2024-11-20 09:24:00.516904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.330 [2024-11-20 09:24:00.516918] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.330 [2024-11-20 09:24:00.516930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.330 [2024-11-20 09:24:00.516938] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:35.330 [2024-11-20 09:24:00.516949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.330 [2024-11-20 09:24:00.516957] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.330 [2024-11-20 09:24:00.516967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.330 09:24:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.330 "name": "Existed_Raid", 00:11:35.330 "uuid": "1ef26245-9fef-4bd6-8e37-490b51192695", 00:11:35.330 "strip_size_kb": 64, 00:11:35.330 "state": "configuring", 00:11:35.330 "raid_level": "concat", 00:11:35.330 "superblock": true, 00:11:35.330 "num_base_bdevs": 4, 00:11:35.330 "num_base_bdevs_discovered": 0, 00:11:35.330 "num_base_bdevs_operational": 4, 00:11:35.330 "base_bdevs_list": [ 00:11:35.330 { 00:11:35.330 "name": "BaseBdev1", 00:11:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.330 "is_configured": false, 00:11:35.330 "data_offset": 0, 00:11:35.330 "data_size": 0 00:11:35.330 }, 00:11:35.330 { 00:11:35.330 "name": "BaseBdev2", 00:11:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.330 "is_configured": false, 00:11:35.330 "data_offset": 0, 00:11:35.330 "data_size": 0 00:11:35.330 }, 00:11:35.330 { 00:11:35.330 "name": "BaseBdev3", 00:11:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.330 "is_configured": false, 00:11:35.330 "data_offset": 0, 00:11:35.330 "data_size": 0 00:11:35.330 }, 00:11:35.330 { 00:11:35.330 "name": "BaseBdev4", 00:11:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.330 "is_configured": false, 00:11:35.330 "data_offset": 0, 00:11:35.330 "data_size": 0 00:11:35.330 } 00:11:35.330 ] 00:11:35.330 }' 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.330 09:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.588 09:24:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.588 [2024-11-20 09:24:01.016226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.588 [2024-11-20 09:24:01.016281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.588 [2024-11-20 09:24:01.028213] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.588 [2024-11-20 09:24:01.028263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.588 [2024-11-20 09:24:01.028274] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.588 [2024-11-20 09:24:01.028286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.588 [2024-11-20 09:24:01.028293] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.588 [2024-11-20 09:24:01.028304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.588 [2024-11-20 09:24:01.028312] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:35.588 [2024-11-20 09:24:01.028323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.588 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.846 [2024-11-20 09:24:01.082330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.846 BaseBdev1 00:11:35.846 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.846 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:35.846 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.847 [ 00:11:35.847 { 00:11:35.847 "name": "BaseBdev1", 00:11:35.847 "aliases": [ 00:11:35.847 "f34366ae-e2f3-4ebd-82a7-5289bf78986e" 00:11:35.847 ], 00:11:35.847 "product_name": "Malloc disk", 00:11:35.847 "block_size": 512, 00:11:35.847 "num_blocks": 65536, 00:11:35.847 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:35.847 "assigned_rate_limits": { 00:11:35.847 "rw_ios_per_sec": 0, 00:11:35.847 "rw_mbytes_per_sec": 0, 00:11:35.847 "r_mbytes_per_sec": 0, 00:11:35.847 "w_mbytes_per_sec": 0 00:11:35.847 }, 00:11:35.847 "claimed": true, 00:11:35.847 "claim_type": "exclusive_write", 00:11:35.847 "zoned": false, 00:11:35.847 "supported_io_types": { 00:11:35.847 "read": true, 00:11:35.847 "write": true, 00:11:35.847 "unmap": true, 00:11:35.847 "flush": true, 00:11:35.847 "reset": true, 00:11:35.847 "nvme_admin": false, 00:11:35.847 "nvme_io": false, 00:11:35.847 "nvme_io_md": false, 00:11:35.847 "write_zeroes": true, 00:11:35.847 "zcopy": true, 00:11:35.847 "get_zone_info": false, 00:11:35.847 "zone_management": false, 00:11:35.847 "zone_append": false, 00:11:35.847 "compare": false, 00:11:35.847 "compare_and_write": false, 00:11:35.847 "abort": true, 00:11:35.847 "seek_hole": false, 00:11:35.847 "seek_data": false, 00:11:35.847 "copy": true, 00:11:35.847 "nvme_iov_md": false 00:11:35.847 }, 00:11:35.847 "memory_domains": [ 00:11:35.847 { 00:11:35.847 "dma_device_id": "system", 00:11:35.847 "dma_device_type": 1 00:11:35.847 }, 00:11:35.847 { 00:11:35.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.847 "dma_device_type": 2 00:11:35.847 } 
00:11:35.847 ], 00:11:35.847 "driver_specific": {} 00:11:35.847 } 00:11:35.847 ] 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.847 09:24:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.847 "name": "Existed_Raid", 00:11:35.847 "uuid": "78dc9337-d551-416d-aa4e-16904197070b", 00:11:35.847 "strip_size_kb": 64, 00:11:35.847 "state": "configuring", 00:11:35.847 "raid_level": "concat", 00:11:35.847 "superblock": true, 00:11:35.847 "num_base_bdevs": 4, 00:11:35.847 "num_base_bdevs_discovered": 1, 00:11:35.847 "num_base_bdevs_operational": 4, 00:11:35.847 "base_bdevs_list": [ 00:11:35.847 { 00:11:35.847 "name": "BaseBdev1", 00:11:35.847 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:35.847 "is_configured": true, 00:11:35.847 "data_offset": 2048, 00:11:35.847 "data_size": 63488 00:11:35.847 }, 00:11:35.847 { 00:11:35.847 "name": "BaseBdev2", 00:11:35.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.847 "is_configured": false, 00:11:35.847 "data_offset": 0, 00:11:35.847 "data_size": 0 00:11:35.847 }, 00:11:35.847 { 00:11:35.847 "name": "BaseBdev3", 00:11:35.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.847 "is_configured": false, 00:11:35.847 "data_offset": 0, 00:11:35.847 "data_size": 0 00:11:35.847 }, 00:11:35.847 { 00:11:35.847 "name": "BaseBdev4", 00:11:35.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.847 "is_configured": false, 00:11:35.847 "data_offset": 0, 00:11:35.847 "data_size": 0 00:11:35.847 } 00:11:35.847 ] 00:11:35.847 }' 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.847 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 09:24:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 09:24:01.605533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.414 [2024-11-20 09:24:01.605597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 [2024-11-20 09:24:01.613582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.414 [2024-11-20 09:24:01.615741] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.414 [2024-11-20 09:24:01.615784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.414 [2024-11-20 09:24:01.615795] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.414 [2024-11-20 09:24:01.615808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.414 [2024-11-20 09:24:01.615816] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:36.414 [2024-11-20 09:24:01.615826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:36.414 "name": "Existed_Raid", 00:11:36.414 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:36.414 "strip_size_kb": 64, 00:11:36.414 "state": "configuring", 00:11:36.414 "raid_level": "concat", 00:11:36.414 "superblock": true, 00:11:36.414 "num_base_bdevs": 4, 00:11:36.414 "num_base_bdevs_discovered": 1, 00:11:36.414 "num_base_bdevs_operational": 4, 00:11:36.414 "base_bdevs_list": [ 00:11:36.414 { 00:11:36.414 "name": "BaseBdev1", 00:11:36.414 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:36.414 "is_configured": true, 00:11:36.414 "data_offset": 2048, 00:11:36.414 "data_size": 63488 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "name": "BaseBdev2", 00:11:36.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.414 "is_configured": false, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 0 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "name": "BaseBdev3", 00:11:36.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.414 "is_configured": false, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 0 00:11:36.414 }, 00:11:36.414 { 00:11:36.414 "name": "BaseBdev4", 00:11:36.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.414 "is_configured": false, 00:11:36.414 "data_offset": 0, 00:11:36.414 "data_size": 0 00:11:36.414 } 00:11:36.414 ] 00:11:36.414 }' 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.414 09:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.672 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.672 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.672 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.930 [2024-11-20 09:24:02.145294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:36.930 BaseBdev2 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.930 [ 00:11:36.930 { 00:11:36.930 "name": "BaseBdev2", 00:11:36.930 "aliases": [ 00:11:36.930 "2841405b-aa76-496b-b5a8-3c784d130ec2" 00:11:36.930 ], 00:11:36.930 "product_name": "Malloc disk", 00:11:36.930 "block_size": 512, 00:11:36.930 "num_blocks": 65536, 00:11:36.930 "uuid": "2841405b-aa76-496b-b5a8-3c784d130ec2", 
00:11:36.930 "assigned_rate_limits": { 00:11:36.930 "rw_ios_per_sec": 0, 00:11:36.930 "rw_mbytes_per_sec": 0, 00:11:36.930 "r_mbytes_per_sec": 0, 00:11:36.930 "w_mbytes_per_sec": 0 00:11:36.930 }, 00:11:36.930 "claimed": true, 00:11:36.930 "claim_type": "exclusive_write", 00:11:36.930 "zoned": false, 00:11:36.930 "supported_io_types": { 00:11:36.930 "read": true, 00:11:36.930 "write": true, 00:11:36.930 "unmap": true, 00:11:36.930 "flush": true, 00:11:36.930 "reset": true, 00:11:36.930 "nvme_admin": false, 00:11:36.930 "nvme_io": false, 00:11:36.930 "nvme_io_md": false, 00:11:36.930 "write_zeroes": true, 00:11:36.930 "zcopy": true, 00:11:36.930 "get_zone_info": false, 00:11:36.930 "zone_management": false, 00:11:36.930 "zone_append": false, 00:11:36.930 "compare": false, 00:11:36.930 "compare_and_write": false, 00:11:36.930 "abort": true, 00:11:36.930 "seek_hole": false, 00:11:36.930 "seek_data": false, 00:11:36.930 "copy": true, 00:11:36.930 "nvme_iov_md": false 00:11:36.930 }, 00:11:36.930 "memory_domains": [ 00:11:36.930 { 00:11:36.930 "dma_device_id": "system", 00:11:36.930 "dma_device_type": 1 00:11:36.930 }, 00:11:36.930 { 00:11:36.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.930 "dma_device_type": 2 00:11:36.930 } 00:11:36.930 ], 00:11:36.930 "driver_specific": {} 00:11:36.930 } 00:11:36.930 ] 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.930 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.931 "name": "Existed_Raid", 00:11:36.931 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:36.931 "strip_size_kb": 64, 00:11:36.931 "state": "configuring", 00:11:36.931 "raid_level": "concat", 00:11:36.931 "superblock": true, 00:11:36.931 "num_base_bdevs": 4, 00:11:36.931 "num_base_bdevs_discovered": 2, 00:11:36.931 
"num_base_bdevs_operational": 4, 00:11:36.931 "base_bdevs_list": [ 00:11:36.931 { 00:11:36.931 "name": "BaseBdev1", 00:11:36.931 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:36.931 "is_configured": true, 00:11:36.931 "data_offset": 2048, 00:11:36.931 "data_size": 63488 00:11:36.931 }, 00:11:36.931 { 00:11:36.931 "name": "BaseBdev2", 00:11:36.931 "uuid": "2841405b-aa76-496b-b5a8-3c784d130ec2", 00:11:36.931 "is_configured": true, 00:11:36.931 "data_offset": 2048, 00:11:36.931 "data_size": 63488 00:11:36.931 }, 00:11:36.931 { 00:11:36.931 "name": "BaseBdev3", 00:11:36.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.931 "is_configured": false, 00:11:36.931 "data_offset": 0, 00:11:36.931 "data_size": 0 00:11:36.931 }, 00:11:36.931 { 00:11:36.931 "name": "BaseBdev4", 00:11:36.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.931 "is_configured": false, 00:11:36.931 "data_offset": 0, 00:11:36.931 "data_size": 0 00:11:36.931 } 00:11:36.931 ] 00:11:36.931 }' 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.931 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.189 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.189 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.189 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 [2024-11-20 09:24:02.697155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.449 BaseBdev3 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 [ 00:11:37.449 { 00:11:37.449 "name": "BaseBdev3", 00:11:37.449 "aliases": [ 00:11:37.449 "47119cab-b21d-4c9c-be15-17f0c863249e" 00:11:37.449 ], 00:11:37.449 "product_name": "Malloc disk", 00:11:37.449 "block_size": 512, 00:11:37.449 "num_blocks": 65536, 00:11:37.449 "uuid": "47119cab-b21d-4c9c-be15-17f0c863249e", 00:11:37.449 "assigned_rate_limits": { 00:11:37.449 "rw_ios_per_sec": 0, 00:11:37.449 "rw_mbytes_per_sec": 0, 00:11:37.449 "r_mbytes_per_sec": 0, 00:11:37.449 "w_mbytes_per_sec": 0 00:11:37.449 }, 00:11:37.449 "claimed": true, 00:11:37.449 "claim_type": "exclusive_write", 00:11:37.449 "zoned": false, 00:11:37.449 "supported_io_types": { 
00:11:37.449 "read": true, 00:11:37.449 "write": true, 00:11:37.449 "unmap": true, 00:11:37.449 "flush": true, 00:11:37.449 "reset": true, 00:11:37.449 "nvme_admin": false, 00:11:37.449 "nvme_io": false, 00:11:37.449 "nvme_io_md": false, 00:11:37.449 "write_zeroes": true, 00:11:37.449 "zcopy": true, 00:11:37.449 "get_zone_info": false, 00:11:37.449 "zone_management": false, 00:11:37.449 "zone_append": false, 00:11:37.449 "compare": false, 00:11:37.449 "compare_and_write": false, 00:11:37.449 "abort": true, 00:11:37.449 "seek_hole": false, 00:11:37.449 "seek_data": false, 00:11:37.449 "copy": true, 00:11:37.449 "nvme_iov_md": false 00:11:37.449 }, 00:11:37.449 "memory_domains": [ 00:11:37.449 { 00:11:37.449 "dma_device_id": "system", 00:11:37.449 "dma_device_type": 1 00:11:37.449 }, 00:11:37.449 { 00:11:37.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.449 "dma_device_type": 2 00:11:37.449 } 00:11:37.449 ], 00:11:37.449 "driver_specific": {} 00:11:37.449 } 00:11:37.449 ] 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.449 "name": "Existed_Raid", 00:11:37.449 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:37.449 "strip_size_kb": 64, 00:11:37.449 "state": "configuring", 00:11:37.449 "raid_level": "concat", 00:11:37.449 "superblock": true, 00:11:37.449 "num_base_bdevs": 4, 00:11:37.449 "num_base_bdevs_discovered": 3, 00:11:37.449 "num_base_bdevs_operational": 4, 00:11:37.449 "base_bdevs_list": [ 00:11:37.449 { 00:11:37.449 "name": "BaseBdev1", 00:11:37.449 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:37.449 "is_configured": true, 00:11:37.449 "data_offset": 2048, 00:11:37.449 "data_size": 63488 00:11:37.449 }, 00:11:37.449 { 00:11:37.449 "name": "BaseBdev2", 00:11:37.449 
"uuid": "2841405b-aa76-496b-b5a8-3c784d130ec2", 00:11:37.449 "is_configured": true, 00:11:37.449 "data_offset": 2048, 00:11:37.449 "data_size": 63488 00:11:37.449 }, 00:11:37.449 { 00:11:37.449 "name": "BaseBdev3", 00:11:37.449 "uuid": "47119cab-b21d-4c9c-be15-17f0c863249e", 00:11:37.449 "is_configured": true, 00:11:37.449 "data_offset": 2048, 00:11:37.449 "data_size": 63488 00:11:37.449 }, 00:11:37.449 { 00:11:37.449 "name": "BaseBdev4", 00:11:37.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.449 "is_configured": false, 00:11:37.449 "data_offset": 0, 00:11:37.449 "data_size": 0 00:11:37.449 } 00:11:37.449 ] 00:11:37.449 }' 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.449 09:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [2024-11-20 09:24:03.258117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.018 [2024-11-20 09:24:03.258443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.018 [2024-11-20 09:24:03.258632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.018 [2024-11-20 09:24:03.259015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.018 BaseBdev4 00:11:38.018 [2024-11-20 09:24:03.259271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.018 [2024-11-20 09:24:03.259289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:38.018 [2024-11-20 09:24:03.259476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [ 00:11:38.018 { 00:11:38.018 "name": "BaseBdev4", 00:11:38.018 "aliases": [ 00:11:38.018 "0f5c13c5-6bdd-4ead-9b4d-2a1fc057efdb" 00:11:38.018 ], 00:11:38.018 "product_name": "Malloc disk", 00:11:38.018 "block_size": 512, 00:11:38.018 
"num_blocks": 65536, 00:11:38.018 "uuid": "0f5c13c5-6bdd-4ead-9b4d-2a1fc057efdb", 00:11:38.018 "assigned_rate_limits": { 00:11:38.018 "rw_ios_per_sec": 0, 00:11:38.018 "rw_mbytes_per_sec": 0, 00:11:38.018 "r_mbytes_per_sec": 0, 00:11:38.018 "w_mbytes_per_sec": 0 00:11:38.018 }, 00:11:38.018 "claimed": true, 00:11:38.018 "claim_type": "exclusive_write", 00:11:38.018 "zoned": false, 00:11:38.018 "supported_io_types": { 00:11:38.018 "read": true, 00:11:38.018 "write": true, 00:11:38.018 "unmap": true, 00:11:38.018 "flush": true, 00:11:38.018 "reset": true, 00:11:38.018 "nvme_admin": false, 00:11:38.018 "nvme_io": false, 00:11:38.018 "nvme_io_md": false, 00:11:38.018 "write_zeroes": true, 00:11:38.018 "zcopy": true, 00:11:38.018 "get_zone_info": false, 00:11:38.018 "zone_management": false, 00:11:38.018 "zone_append": false, 00:11:38.018 "compare": false, 00:11:38.018 "compare_and_write": false, 00:11:38.018 "abort": true, 00:11:38.018 "seek_hole": false, 00:11:38.018 "seek_data": false, 00:11:38.018 "copy": true, 00:11:38.018 "nvme_iov_md": false 00:11:38.018 }, 00:11:38.018 "memory_domains": [ 00:11:38.018 { 00:11:38.018 "dma_device_id": "system", 00:11:38.018 "dma_device_type": 1 00:11:38.018 }, 00:11:38.018 { 00:11:38.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.018 "dma_device_type": 2 00:11:38.018 } 00:11:38.018 ], 00:11:38.018 "driver_specific": {} 00:11:38.018 } 00:11:38.018 ] 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.018 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.019 "name": "Existed_Raid", 00:11:38.019 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:38.019 "strip_size_kb": 64, 00:11:38.019 "state": "online", 00:11:38.019 "raid_level": "concat", 00:11:38.019 "superblock": true, 00:11:38.019 "num_base_bdevs": 4, 
00:11:38.019 "num_base_bdevs_discovered": 4, 00:11:38.019 "num_base_bdevs_operational": 4, 00:11:38.019 "base_bdevs_list": [ 00:11:38.019 { 00:11:38.019 "name": "BaseBdev1", 00:11:38.019 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:38.019 "is_configured": true, 00:11:38.019 "data_offset": 2048, 00:11:38.019 "data_size": 63488 00:11:38.019 }, 00:11:38.019 { 00:11:38.019 "name": "BaseBdev2", 00:11:38.019 "uuid": "2841405b-aa76-496b-b5a8-3c784d130ec2", 00:11:38.019 "is_configured": true, 00:11:38.019 "data_offset": 2048, 00:11:38.019 "data_size": 63488 00:11:38.019 }, 00:11:38.019 { 00:11:38.019 "name": "BaseBdev3", 00:11:38.019 "uuid": "47119cab-b21d-4c9c-be15-17f0c863249e", 00:11:38.019 "is_configured": true, 00:11:38.019 "data_offset": 2048, 00:11:38.019 "data_size": 63488 00:11:38.019 }, 00:11:38.019 { 00:11:38.019 "name": "BaseBdev4", 00:11:38.019 "uuid": "0f5c13c5-6bdd-4ead-9b4d-2a1fc057efdb", 00:11:38.019 "is_configured": true, 00:11:38.019 "data_offset": 2048, 00:11:38.019 "data_size": 63488 00:11:38.019 } 00:11:38.019 ] 00:11:38.019 }' 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.019 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.593 
09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 [2024-11-20 09:24:03.793756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.593 "name": "Existed_Raid", 00:11:38.593 "aliases": [ 00:11:38.593 "7d33d4de-2a14-483c-be02-552bd3dbd0d3" 00:11:38.593 ], 00:11:38.593 "product_name": "Raid Volume", 00:11:38.593 "block_size": 512, 00:11:38.593 "num_blocks": 253952, 00:11:38.593 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:38.593 "assigned_rate_limits": { 00:11:38.593 "rw_ios_per_sec": 0, 00:11:38.593 "rw_mbytes_per_sec": 0, 00:11:38.593 "r_mbytes_per_sec": 0, 00:11:38.593 "w_mbytes_per_sec": 0 00:11:38.593 }, 00:11:38.593 "claimed": false, 00:11:38.593 "zoned": false, 00:11:38.593 "supported_io_types": { 00:11:38.593 "read": true, 00:11:38.593 "write": true, 00:11:38.593 "unmap": true, 00:11:38.593 "flush": true, 00:11:38.593 "reset": true, 00:11:38.593 "nvme_admin": false, 00:11:38.593 "nvme_io": false, 00:11:38.593 "nvme_io_md": false, 00:11:38.593 "write_zeroes": true, 00:11:38.593 "zcopy": false, 00:11:38.593 "get_zone_info": false, 00:11:38.593 "zone_management": false, 00:11:38.593 "zone_append": false, 00:11:38.593 "compare": false, 00:11:38.593 "compare_and_write": false, 00:11:38.593 "abort": false, 00:11:38.593 "seek_hole": false, 00:11:38.593 "seek_data": false, 00:11:38.593 "copy": false, 00:11:38.593 
"nvme_iov_md": false 00:11:38.593 }, 00:11:38.593 "memory_domains": [ 00:11:38.593 { 00:11:38.593 "dma_device_id": "system", 00:11:38.593 "dma_device_type": 1 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.593 "dma_device_type": 2 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "system", 00:11:38.593 "dma_device_type": 1 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.593 "dma_device_type": 2 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "system", 00:11:38.593 "dma_device_type": 1 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.593 "dma_device_type": 2 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "system", 00:11:38.593 "dma_device_type": 1 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.593 "dma_device_type": 2 00:11:38.593 } 00:11:38.593 ], 00:11:38.593 "driver_specific": { 00:11:38.593 "raid": { 00:11:38.593 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:38.593 "strip_size_kb": 64, 00:11:38.593 "state": "online", 00:11:38.593 "raid_level": "concat", 00:11:38.593 "superblock": true, 00:11:38.593 "num_base_bdevs": 4, 00:11:38.593 "num_base_bdevs_discovered": 4, 00:11:38.593 "num_base_bdevs_operational": 4, 00:11:38.593 "base_bdevs_list": [ 00:11:38.593 { 00:11:38.593 "name": "BaseBdev1", 00:11:38.593 "uuid": "f34366ae-e2f3-4ebd-82a7-5289bf78986e", 00:11:38.593 "is_configured": true, 00:11:38.593 "data_offset": 2048, 00:11:38.593 "data_size": 63488 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "name": "BaseBdev2", 00:11:38.593 "uuid": "2841405b-aa76-496b-b5a8-3c784d130ec2", 00:11:38.593 "is_configured": true, 00:11:38.593 "data_offset": 2048, 00:11:38.593 "data_size": 63488 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "name": "BaseBdev3", 00:11:38.593 "uuid": "47119cab-b21d-4c9c-be15-17f0c863249e", 00:11:38.593 "is_configured": true, 
00:11:38.593 "data_offset": 2048, 00:11:38.593 "data_size": 63488 00:11:38.593 }, 00:11:38.593 { 00:11:38.593 "name": "BaseBdev4", 00:11:38.593 "uuid": "0f5c13c5-6bdd-4ead-9b4d-2a1fc057efdb", 00:11:38.593 "is_configured": true, 00:11:38.593 "data_offset": 2048, 00:11:38.593 "data_size": 63488 00:11:38.593 } 00:11:38.593 ] 00:11:38.593 } 00:11:38.593 } 00:11:38.593 }' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:38.593 BaseBdev2 00:11:38.593 BaseBdev3 00:11:38.593 BaseBdev4' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.593 09:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 09:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.593 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.852 [2024-11-20 09:24:04.108896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.852 [2024-11-20 09:24:04.109022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.852 [2024-11-20 09:24:04.109096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.852 "name": "Existed_Raid", 00:11:38.852 "uuid": "7d33d4de-2a14-483c-be02-552bd3dbd0d3", 00:11:38.852 "strip_size_kb": 64, 00:11:38.852 "state": "offline", 00:11:38.852 "raid_level": "concat", 00:11:38.852 "superblock": true, 00:11:38.852 "num_base_bdevs": 4, 00:11:38.852 "num_base_bdevs_discovered": 3, 00:11:38.852 "num_base_bdevs_operational": 3, 00:11:38.852 "base_bdevs_list": [ 00:11:38.852 { 00:11:38.852 "name": null, 00:11:38.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.852 "is_configured": false, 00:11:38.852 "data_offset": 0, 00:11:38.852 "data_size": 63488 00:11:38.852 }, 00:11:38.852 { 00:11:38.852 "name": "BaseBdev2", 00:11:38.852 "uuid": "2841405b-aa76-496b-b5a8-3c784d130ec2", 00:11:38.852 "is_configured": true, 00:11:38.852 "data_offset": 2048, 00:11:38.852 "data_size": 63488 00:11:38.852 }, 00:11:38.852 { 00:11:38.852 "name": "BaseBdev3", 00:11:38.852 "uuid": "47119cab-b21d-4c9c-be15-17f0c863249e", 00:11:38.852 "is_configured": true, 00:11:38.852 "data_offset": 2048, 00:11:38.852 "data_size": 63488 00:11:38.852 }, 00:11:38.852 { 00:11:38.852 "name": "BaseBdev4", 00:11:38.852 "uuid": "0f5c13c5-6bdd-4ead-9b4d-2a1fc057efdb", 00:11:38.852 "is_configured": true, 00:11:38.852 "data_offset": 2048, 00:11:38.852 "data_size": 63488 00:11:38.852 } 00:11:38.852 ] 00:11:38.852 }' 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.852 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.419 
09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.419 [2024-11-20 09:24:04.735570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.419 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.420 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.420 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.420 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.678 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:39.678 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.678 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.678 09:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.678 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.679 09:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.679 [2024-11-20 09:24:04.912029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:39.679 09:24:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.679 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.679 [2024-11-20 09:24:05.086249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:39.679 [2024-11-20 09:24:05.086382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.937 BaseBdev2 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.937 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.937 [ 00:11:39.937 { 00:11:39.937 "name": "BaseBdev2", 00:11:39.937 "aliases": [ 00:11:39.937 
"1c7c789a-5c8b-4015-9fb5-52ab814b9251" 00:11:39.937 ], 00:11:39.937 "product_name": "Malloc disk", 00:11:39.937 "block_size": 512, 00:11:39.937 "num_blocks": 65536, 00:11:39.937 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:39.937 "assigned_rate_limits": { 00:11:39.937 "rw_ios_per_sec": 0, 00:11:39.937 "rw_mbytes_per_sec": 0, 00:11:39.937 "r_mbytes_per_sec": 0, 00:11:39.937 "w_mbytes_per_sec": 0 00:11:39.937 }, 00:11:39.937 "claimed": false, 00:11:39.938 "zoned": false, 00:11:39.938 "supported_io_types": { 00:11:39.938 "read": true, 00:11:39.938 "write": true, 00:11:39.938 "unmap": true, 00:11:39.938 "flush": true, 00:11:39.938 "reset": true, 00:11:39.938 "nvme_admin": false, 00:11:39.938 "nvme_io": false, 00:11:39.938 "nvme_io_md": false, 00:11:39.938 "write_zeroes": true, 00:11:39.938 "zcopy": true, 00:11:39.938 "get_zone_info": false, 00:11:39.938 "zone_management": false, 00:11:39.938 "zone_append": false, 00:11:39.938 "compare": false, 00:11:39.938 "compare_and_write": false, 00:11:39.938 "abort": true, 00:11:39.938 "seek_hole": false, 00:11:39.938 "seek_data": false, 00:11:39.938 "copy": true, 00:11:39.938 "nvme_iov_md": false 00:11:39.938 }, 00:11:39.938 "memory_domains": [ 00:11:39.938 { 00:11:39.938 "dma_device_id": "system", 00:11:39.938 "dma_device_type": 1 00:11:39.938 }, 00:11:39.938 { 00:11:39.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.938 "dma_device_type": 2 00:11:39.938 } 00:11:39.938 ], 00:11:39.938 "driver_specific": {} 00:11:39.938 } 00:11:39.938 ] 00:11:39.938 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.938 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.938 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.938 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.938 09:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.938 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.938 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.197 BaseBdev3 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.197 [ 00:11:40.197 { 
00:11:40.197 "name": "BaseBdev3", 00:11:40.197 "aliases": [ 00:11:40.197 "03e88872-2abb-40fc-9169-c5a479175d04" 00:11:40.197 ], 00:11:40.197 "product_name": "Malloc disk", 00:11:40.197 "block_size": 512, 00:11:40.197 "num_blocks": 65536, 00:11:40.197 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:40.197 "assigned_rate_limits": { 00:11:40.197 "rw_ios_per_sec": 0, 00:11:40.197 "rw_mbytes_per_sec": 0, 00:11:40.197 "r_mbytes_per_sec": 0, 00:11:40.197 "w_mbytes_per_sec": 0 00:11:40.197 }, 00:11:40.197 "claimed": false, 00:11:40.197 "zoned": false, 00:11:40.197 "supported_io_types": { 00:11:40.197 "read": true, 00:11:40.197 "write": true, 00:11:40.197 "unmap": true, 00:11:40.197 "flush": true, 00:11:40.197 "reset": true, 00:11:40.197 "nvme_admin": false, 00:11:40.197 "nvme_io": false, 00:11:40.197 "nvme_io_md": false, 00:11:40.197 "write_zeroes": true, 00:11:40.197 "zcopy": true, 00:11:40.197 "get_zone_info": false, 00:11:40.197 "zone_management": false, 00:11:40.197 "zone_append": false, 00:11:40.197 "compare": false, 00:11:40.197 "compare_and_write": false, 00:11:40.197 "abort": true, 00:11:40.197 "seek_hole": false, 00:11:40.197 "seek_data": false, 00:11:40.197 "copy": true, 00:11:40.197 "nvme_iov_md": false 00:11:40.197 }, 00:11:40.197 "memory_domains": [ 00:11:40.197 { 00:11:40.197 "dma_device_id": "system", 00:11:40.197 "dma_device_type": 1 00:11:40.197 }, 00:11:40.197 { 00:11:40.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.197 "dma_device_type": 2 00:11:40.197 } 00:11:40.197 ], 00:11:40.197 "driver_specific": {} 00:11:40.197 } 00:11:40.197 ] 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.197 BaseBdev4 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.197 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:40.198 [ 00:11:40.198 { 00:11:40.198 "name": "BaseBdev4", 00:11:40.198 "aliases": [ 00:11:40.198 "6ea7ef23-dcec-41e3-8b24-55431aea6b45" 00:11:40.198 ], 00:11:40.198 "product_name": "Malloc disk", 00:11:40.198 "block_size": 512, 00:11:40.198 "num_blocks": 65536, 00:11:40.198 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:40.198 "assigned_rate_limits": { 00:11:40.198 "rw_ios_per_sec": 0, 00:11:40.198 "rw_mbytes_per_sec": 0, 00:11:40.198 "r_mbytes_per_sec": 0, 00:11:40.198 "w_mbytes_per_sec": 0 00:11:40.198 }, 00:11:40.198 "claimed": false, 00:11:40.198 "zoned": false, 00:11:40.198 "supported_io_types": { 00:11:40.198 "read": true, 00:11:40.198 "write": true, 00:11:40.198 "unmap": true, 00:11:40.198 "flush": true, 00:11:40.198 "reset": true, 00:11:40.198 "nvme_admin": false, 00:11:40.198 "nvme_io": false, 00:11:40.198 "nvme_io_md": false, 00:11:40.198 "write_zeroes": true, 00:11:40.198 "zcopy": true, 00:11:40.198 "get_zone_info": false, 00:11:40.198 "zone_management": false, 00:11:40.198 "zone_append": false, 00:11:40.198 "compare": false, 00:11:40.198 "compare_and_write": false, 00:11:40.198 "abort": true, 00:11:40.198 "seek_hole": false, 00:11:40.198 "seek_data": false, 00:11:40.198 "copy": true, 00:11:40.198 "nvme_iov_md": false 00:11:40.198 }, 00:11:40.198 "memory_domains": [ 00:11:40.198 { 00:11:40.198 "dma_device_id": "system", 00:11:40.198 "dma_device_type": 1 00:11:40.198 }, 00:11:40.198 { 00:11:40.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.198 "dma_device_type": 2 00:11:40.198 } 00:11:40.198 ], 00:11:40.198 "driver_specific": {} 00:11:40.198 } 00:11:40.198 ] 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.198 09:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.198 [2024-11-20 09:24:05.539538] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.198 [2024-11-20 09:24:05.539647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.198 [2024-11-20 09:24:05.539719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.198 [2024-11-20 09:24:05.541901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.198 [2024-11-20 09:24:05.542011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.198 "name": "Existed_Raid", 00:11:40.198 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:40.198 "strip_size_kb": 64, 00:11:40.198 "state": "configuring", 00:11:40.198 "raid_level": "concat", 00:11:40.198 "superblock": true, 00:11:40.198 "num_base_bdevs": 4, 00:11:40.198 "num_base_bdevs_discovered": 3, 00:11:40.198 "num_base_bdevs_operational": 4, 00:11:40.198 "base_bdevs_list": [ 00:11:40.198 { 00:11:40.198 "name": "BaseBdev1", 00:11:40.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.198 "is_configured": false, 00:11:40.198 "data_offset": 0, 00:11:40.198 "data_size": 0 00:11:40.198 }, 00:11:40.198 { 00:11:40.198 "name": "BaseBdev2", 00:11:40.198 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:40.198 "is_configured": true, 00:11:40.198 "data_offset": 2048, 00:11:40.198 "data_size": 63488 
00:11:40.198 }, 00:11:40.198 { 00:11:40.198 "name": "BaseBdev3", 00:11:40.198 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:40.198 "is_configured": true, 00:11:40.198 "data_offset": 2048, 00:11:40.198 "data_size": 63488 00:11:40.198 }, 00:11:40.198 { 00:11:40.198 "name": "BaseBdev4", 00:11:40.198 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:40.198 "is_configured": true, 00:11:40.198 "data_offset": 2048, 00:11:40.198 "data_size": 63488 00:11:40.198 } 00:11:40.198 ] 00:11:40.198 }' 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.198 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.766 [2024-11-20 09:24:05.934829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.766 "name": "Existed_Raid", 00:11:40.766 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:40.766 "strip_size_kb": 64, 00:11:40.766 "state": "configuring", 00:11:40.766 "raid_level": "concat", 00:11:40.766 "superblock": true, 00:11:40.766 "num_base_bdevs": 4, 00:11:40.766 "num_base_bdevs_discovered": 2, 00:11:40.766 "num_base_bdevs_operational": 4, 00:11:40.766 "base_bdevs_list": [ 00:11:40.766 { 00:11:40.766 "name": "BaseBdev1", 00:11:40.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.766 "is_configured": false, 00:11:40.766 "data_offset": 0, 00:11:40.766 "data_size": 0 00:11:40.766 }, 00:11:40.766 { 00:11:40.766 "name": null, 00:11:40.766 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:40.766 "is_configured": false, 00:11:40.766 "data_offset": 0, 00:11:40.766 "data_size": 63488 
00:11:40.766 }, 00:11:40.766 { 00:11:40.766 "name": "BaseBdev3", 00:11:40.766 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:40.766 "is_configured": true, 00:11:40.766 "data_offset": 2048, 00:11:40.766 "data_size": 63488 00:11:40.766 }, 00:11:40.766 { 00:11:40.766 "name": "BaseBdev4", 00:11:40.766 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:40.766 "is_configured": true, 00:11:40.766 "data_offset": 2048, 00:11:40.766 "data_size": 63488 00:11:40.766 } 00:11:40.766 ] 00:11:40.766 }' 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.766 09:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.025 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.025 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.025 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.025 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.025 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.283 [2024-11-20 09:24:06.528543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.283 BaseBdev1 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.283 [ 00:11:41.283 { 00:11:41.283 "name": "BaseBdev1", 00:11:41.283 "aliases": [ 00:11:41.283 "f40d87e1-e139-456d-abab-7932a7b34439" 00:11:41.283 ], 00:11:41.283 "product_name": "Malloc disk", 00:11:41.283 "block_size": 512, 00:11:41.283 "num_blocks": 65536, 00:11:41.283 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:41.283 "assigned_rate_limits": { 00:11:41.283 "rw_ios_per_sec": 0, 00:11:41.283 "rw_mbytes_per_sec": 0, 
00:11:41.283 "r_mbytes_per_sec": 0, 00:11:41.283 "w_mbytes_per_sec": 0 00:11:41.283 }, 00:11:41.283 "claimed": true, 00:11:41.283 "claim_type": "exclusive_write", 00:11:41.283 "zoned": false, 00:11:41.283 "supported_io_types": { 00:11:41.283 "read": true, 00:11:41.283 "write": true, 00:11:41.283 "unmap": true, 00:11:41.283 "flush": true, 00:11:41.283 "reset": true, 00:11:41.283 "nvme_admin": false, 00:11:41.283 "nvme_io": false, 00:11:41.283 "nvme_io_md": false, 00:11:41.283 "write_zeroes": true, 00:11:41.283 "zcopy": true, 00:11:41.283 "get_zone_info": false, 00:11:41.283 "zone_management": false, 00:11:41.283 "zone_append": false, 00:11:41.283 "compare": false, 00:11:41.283 "compare_and_write": false, 00:11:41.283 "abort": true, 00:11:41.283 "seek_hole": false, 00:11:41.283 "seek_data": false, 00:11:41.283 "copy": true, 00:11:41.283 "nvme_iov_md": false 00:11:41.283 }, 00:11:41.283 "memory_domains": [ 00:11:41.283 { 00:11:41.283 "dma_device_id": "system", 00:11:41.283 "dma_device_type": 1 00:11:41.283 }, 00:11:41.283 { 00:11:41.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.283 "dma_device_type": 2 00:11:41.283 } 00:11:41.283 ], 00:11:41.283 "driver_specific": {} 00:11:41.283 } 00:11:41.283 ] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.283 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.283 09:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.284 "name": "Existed_Raid", 00:11:41.284 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:41.284 "strip_size_kb": 64, 00:11:41.284 "state": "configuring", 00:11:41.284 "raid_level": "concat", 00:11:41.284 "superblock": true, 00:11:41.284 "num_base_bdevs": 4, 00:11:41.284 "num_base_bdevs_discovered": 3, 00:11:41.284 "num_base_bdevs_operational": 4, 00:11:41.284 "base_bdevs_list": [ 00:11:41.284 { 00:11:41.284 "name": "BaseBdev1", 00:11:41.284 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:41.284 "is_configured": true, 00:11:41.284 "data_offset": 2048, 00:11:41.284 "data_size": 63488 00:11:41.284 }, 00:11:41.284 { 
00:11:41.284 "name": null, 00:11:41.284 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:41.284 "is_configured": false, 00:11:41.284 "data_offset": 0, 00:11:41.284 "data_size": 63488 00:11:41.284 }, 00:11:41.284 { 00:11:41.284 "name": "BaseBdev3", 00:11:41.284 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:41.284 "is_configured": true, 00:11:41.284 "data_offset": 2048, 00:11:41.284 "data_size": 63488 00:11:41.284 }, 00:11:41.284 { 00:11:41.284 "name": "BaseBdev4", 00:11:41.284 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:41.284 "is_configured": true, 00:11:41.284 "data_offset": 2048, 00:11:41.284 "data_size": 63488 00:11:41.284 } 00:11:41.284 ] 00:11:41.284 }' 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.284 09:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.852 [2024-11-20 09:24:07.063893] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.852 09:24:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.852 "name": "Existed_Raid", 00:11:41.852 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:41.852 "strip_size_kb": 64, 00:11:41.852 "state": "configuring", 00:11:41.852 "raid_level": "concat", 00:11:41.852 "superblock": true, 00:11:41.852 "num_base_bdevs": 4, 00:11:41.852 "num_base_bdevs_discovered": 2, 00:11:41.852 "num_base_bdevs_operational": 4, 00:11:41.852 "base_bdevs_list": [ 00:11:41.852 { 00:11:41.852 "name": "BaseBdev1", 00:11:41.852 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:41.852 "is_configured": true, 00:11:41.852 "data_offset": 2048, 00:11:41.852 "data_size": 63488 00:11:41.852 }, 00:11:41.852 { 00:11:41.852 "name": null, 00:11:41.852 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:41.852 "is_configured": false, 00:11:41.852 "data_offset": 0, 00:11:41.852 "data_size": 63488 00:11:41.852 }, 00:11:41.852 { 00:11:41.852 "name": null, 00:11:41.852 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:41.852 "is_configured": false, 00:11:41.852 "data_offset": 0, 00:11:41.852 "data_size": 63488 00:11:41.852 }, 00:11:41.852 { 00:11:41.852 "name": "BaseBdev4", 00:11:41.852 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:41.852 "is_configured": true, 00:11:41.852 "data_offset": 2048, 00:11:41.852 "data_size": 63488 00:11:41.852 } 00:11:41.852 ] 00:11:41.852 }' 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.852 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.111 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.111 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.111 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.111 
09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.111 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.370 [2024-11-20 09:24:07.599142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.370 "name": "Existed_Raid", 00:11:42.370 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:42.370 "strip_size_kb": 64, 00:11:42.370 "state": "configuring", 00:11:42.370 "raid_level": "concat", 00:11:42.370 "superblock": true, 00:11:42.370 "num_base_bdevs": 4, 00:11:42.370 "num_base_bdevs_discovered": 3, 00:11:42.370 "num_base_bdevs_operational": 4, 00:11:42.370 "base_bdevs_list": [ 00:11:42.370 { 00:11:42.370 "name": "BaseBdev1", 00:11:42.370 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:42.370 "is_configured": true, 00:11:42.370 "data_offset": 2048, 00:11:42.370 "data_size": 63488 00:11:42.370 }, 00:11:42.370 { 00:11:42.370 "name": null, 00:11:42.370 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:42.370 "is_configured": false, 00:11:42.370 "data_offset": 0, 00:11:42.370 "data_size": 63488 00:11:42.370 }, 00:11:42.370 { 00:11:42.370 "name": "BaseBdev3", 00:11:42.370 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:42.370 "is_configured": true, 00:11:42.370 "data_offset": 2048, 00:11:42.370 "data_size": 63488 00:11:42.370 }, 00:11:42.370 { 00:11:42.370 "name": "BaseBdev4", 00:11:42.370 "uuid": 
"6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:42.370 "is_configured": true, 00:11:42.370 "data_offset": 2048, 00:11:42.370 "data_size": 63488 00:11:42.370 } 00:11:42.370 ] 00:11:42.370 }' 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.370 09:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.937 [2024-11-20 09:24:08.146365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.937 "name": "Existed_Raid", 00:11:42.937 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:42.937 "strip_size_kb": 64, 00:11:42.937 "state": "configuring", 00:11:42.937 "raid_level": "concat", 00:11:42.937 "superblock": true, 00:11:42.937 "num_base_bdevs": 4, 00:11:42.937 "num_base_bdevs_discovered": 2, 00:11:42.937 "num_base_bdevs_operational": 4, 00:11:42.937 "base_bdevs_list": [ 00:11:42.937 { 00:11:42.937 "name": null, 00:11:42.937 
"uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:42.937 "is_configured": false, 00:11:42.937 "data_offset": 0, 00:11:42.937 "data_size": 63488 00:11:42.937 }, 00:11:42.937 { 00:11:42.937 "name": null, 00:11:42.937 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:42.937 "is_configured": false, 00:11:42.937 "data_offset": 0, 00:11:42.937 "data_size": 63488 00:11:42.937 }, 00:11:42.937 { 00:11:42.937 "name": "BaseBdev3", 00:11:42.937 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:42.937 "is_configured": true, 00:11:42.937 "data_offset": 2048, 00:11:42.937 "data_size": 63488 00:11:42.937 }, 00:11:42.937 { 00:11:42.937 "name": "BaseBdev4", 00:11:42.937 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:42.937 "is_configured": true, 00:11:42.937 "data_offset": 2048, 00:11:42.937 "data_size": 63488 00:11:42.937 } 00:11:42.937 ] 00:11:42.937 }' 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.937 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.504 09:24:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.505 [2024-11-20 09:24:08.788840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.505 09:24:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.505 "name": "Existed_Raid", 00:11:43.505 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:43.505 "strip_size_kb": 64, 00:11:43.505 "state": "configuring", 00:11:43.505 "raid_level": "concat", 00:11:43.505 "superblock": true, 00:11:43.505 "num_base_bdevs": 4, 00:11:43.505 "num_base_bdevs_discovered": 3, 00:11:43.505 "num_base_bdevs_operational": 4, 00:11:43.505 "base_bdevs_list": [ 00:11:43.505 { 00:11:43.505 "name": null, 00:11:43.505 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:43.505 "is_configured": false, 00:11:43.505 "data_offset": 0, 00:11:43.505 "data_size": 63488 00:11:43.505 }, 00:11:43.505 { 00:11:43.505 "name": "BaseBdev2", 00:11:43.505 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:43.505 "is_configured": true, 00:11:43.505 "data_offset": 2048, 00:11:43.505 "data_size": 63488 00:11:43.505 }, 00:11:43.505 { 00:11:43.505 "name": "BaseBdev3", 00:11:43.505 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:43.505 "is_configured": true, 00:11:43.505 "data_offset": 2048, 00:11:43.505 "data_size": 63488 00:11:43.505 }, 00:11:43.505 { 00:11:43.505 "name": "BaseBdev4", 00:11:43.505 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:43.505 "is_configured": true, 00:11:43.505 "data_offset": 2048, 00:11:43.505 "data_size": 63488 00:11:43.505 } 00:11:43.505 ] 00:11:43.505 }' 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.505 09:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.073 09:24:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f40d87e1-e139-456d-abab-7932a7b34439 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.073 [2024-11-20 09:24:09.425967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.073 [2024-11-20 09:24:09.426258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.073 [2024-11-20 09:24:09.426272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.073 [2024-11-20 09:24:09.426593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:44.073 NewBaseBdev 00:11:44.073 [2024-11-20 09:24:09.426762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.073 [2024-11-20 09:24:09.426784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:44.073 [2024-11-20 09:24:09.426959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.073 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.073 09:24:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.073 [ 00:11:44.073 { 00:11:44.073 "name": "NewBaseBdev", 00:11:44.073 "aliases": [ 00:11:44.073 "f40d87e1-e139-456d-abab-7932a7b34439" 00:11:44.073 ], 00:11:44.073 "product_name": "Malloc disk", 00:11:44.073 "block_size": 512, 00:11:44.073 "num_blocks": 65536, 00:11:44.073 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:44.073 "assigned_rate_limits": { 00:11:44.073 "rw_ios_per_sec": 0, 00:11:44.073 "rw_mbytes_per_sec": 0, 00:11:44.073 "r_mbytes_per_sec": 0, 00:11:44.073 "w_mbytes_per_sec": 0 00:11:44.073 }, 00:11:44.073 "claimed": true, 00:11:44.073 "claim_type": "exclusive_write", 00:11:44.073 "zoned": false, 00:11:44.073 "supported_io_types": { 00:11:44.073 "read": true, 00:11:44.073 "write": true, 00:11:44.073 "unmap": true, 00:11:44.073 "flush": true, 00:11:44.073 "reset": true, 00:11:44.073 "nvme_admin": false, 00:11:44.073 "nvme_io": false, 00:11:44.073 "nvme_io_md": false, 00:11:44.073 "write_zeroes": true, 00:11:44.073 "zcopy": true, 00:11:44.073 "get_zone_info": false, 00:11:44.073 "zone_management": false, 00:11:44.073 "zone_append": false, 00:11:44.073 "compare": false, 00:11:44.073 "compare_and_write": false, 00:11:44.074 "abort": true, 00:11:44.074 "seek_hole": false, 00:11:44.074 "seek_data": false, 00:11:44.074 "copy": true, 00:11:44.074 "nvme_iov_md": false 00:11:44.074 }, 00:11:44.074 "memory_domains": [ 00:11:44.074 { 00:11:44.074 "dma_device_id": "system", 00:11:44.074 "dma_device_type": 1 00:11:44.074 }, 00:11:44.074 { 00:11:44.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.074 "dma_device_type": 2 00:11:44.074 } 00:11:44.074 ], 00:11:44.074 "driver_specific": {} 00:11:44.074 } 00:11:44.074 ] 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:44.074 09:24:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.074 "name": "Existed_Raid", 00:11:44.074 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:44.074 "strip_size_kb": 64, 00:11:44.074 
"state": "online", 00:11:44.074 "raid_level": "concat", 00:11:44.074 "superblock": true, 00:11:44.074 "num_base_bdevs": 4, 00:11:44.074 "num_base_bdevs_discovered": 4, 00:11:44.074 "num_base_bdevs_operational": 4, 00:11:44.074 "base_bdevs_list": [ 00:11:44.074 { 00:11:44.074 "name": "NewBaseBdev", 00:11:44.074 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:44.074 "is_configured": true, 00:11:44.074 "data_offset": 2048, 00:11:44.074 "data_size": 63488 00:11:44.074 }, 00:11:44.074 { 00:11:44.074 "name": "BaseBdev2", 00:11:44.074 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:44.074 "is_configured": true, 00:11:44.074 "data_offset": 2048, 00:11:44.074 "data_size": 63488 00:11:44.074 }, 00:11:44.074 { 00:11:44.074 "name": "BaseBdev3", 00:11:44.074 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:44.074 "is_configured": true, 00:11:44.074 "data_offset": 2048, 00:11:44.074 "data_size": 63488 00:11:44.074 }, 00:11:44.074 { 00:11:44.074 "name": "BaseBdev4", 00:11:44.074 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:44.074 "is_configured": true, 00:11:44.074 "data_offset": 2048, 00:11:44.074 "data_size": 63488 00:11:44.074 } 00:11:44.074 ] 00:11:44.074 }' 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.074 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.642 
09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.642 [2024-11-20 09:24:09.937629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.642 "name": "Existed_Raid", 00:11:44.642 "aliases": [ 00:11:44.642 "ecb0202f-f268-4a98-be84-cfc97f1f31af" 00:11:44.642 ], 00:11:44.642 "product_name": "Raid Volume", 00:11:44.642 "block_size": 512, 00:11:44.642 "num_blocks": 253952, 00:11:44.642 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:44.642 "assigned_rate_limits": { 00:11:44.642 "rw_ios_per_sec": 0, 00:11:44.642 "rw_mbytes_per_sec": 0, 00:11:44.642 "r_mbytes_per_sec": 0, 00:11:44.642 "w_mbytes_per_sec": 0 00:11:44.642 }, 00:11:44.642 "claimed": false, 00:11:44.642 "zoned": false, 00:11:44.642 "supported_io_types": { 00:11:44.642 "read": true, 00:11:44.642 "write": true, 00:11:44.642 "unmap": true, 00:11:44.642 "flush": true, 00:11:44.642 "reset": true, 00:11:44.642 "nvme_admin": false, 00:11:44.642 "nvme_io": false, 00:11:44.642 "nvme_io_md": false, 00:11:44.642 "write_zeroes": true, 00:11:44.642 "zcopy": false, 00:11:44.642 "get_zone_info": false, 00:11:44.642 "zone_management": false, 00:11:44.642 "zone_append": false, 00:11:44.642 "compare": false, 00:11:44.642 "compare_and_write": false, 00:11:44.642 "abort": 
false, 00:11:44.642 "seek_hole": false, 00:11:44.642 "seek_data": false, 00:11:44.642 "copy": false, 00:11:44.642 "nvme_iov_md": false 00:11:44.642 }, 00:11:44.642 "memory_domains": [ 00:11:44.642 { 00:11:44.642 "dma_device_id": "system", 00:11:44.642 "dma_device_type": 1 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.642 "dma_device_type": 2 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "system", 00:11:44.642 "dma_device_type": 1 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.642 "dma_device_type": 2 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "system", 00:11:44.642 "dma_device_type": 1 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.642 "dma_device_type": 2 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "system", 00:11:44.642 "dma_device_type": 1 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.642 "dma_device_type": 2 00:11:44.642 } 00:11:44.642 ], 00:11:44.642 "driver_specific": { 00:11:44.642 "raid": { 00:11:44.642 "uuid": "ecb0202f-f268-4a98-be84-cfc97f1f31af", 00:11:44.642 "strip_size_kb": 64, 00:11:44.642 "state": "online", 00:11:44.642 "raid_level": "concat", 00:11:44.642 "superblock": true, 00:11:44.642 "num_base_bdevs": 4, 00:11:44.642 "num_base_bdevs_discovered": 4, 00:11:44.642 "num_base_bdevs_operational": 4, 00:11:44.642 "base_bdevs_list": [ 00:11:44.642 { 00:11:44.642 "name": "NewBaseBdev", 00:11:44.642 "uuid": "f40d87e1-e139-456d-abab-7932a7b34439", 00:11:44.642 "is_configured": true, 00:11:44.642 "data_offset": 2048, 00:11:44.642 "data_size": 63488 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "name": "BaseBdev2", 00:11:44.642 "uuid": "1c7c789a-5c8b-4015-9fb5-52ab814b9251", 00:11:44.642 "is_configured": true, 00:11:44.642 "data_offset": 2048, 00:11:44.642 "data_size": 63488 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 
"name": "BaseBdev3", 00:11:44.642 "uuid": "03e88872-2abb-40fc-9169-c5a479175d04", 00:11:44.642 "is_configured": true, 00:11:44.642 "data_offset": 2048, 00:11:44.642 "data_size": 63488 00:11:44.642 }, 00:11:44.642 { 00:11:44.642 "name": "BaseBdev4", 00:11:44.642 "uuid": "6ea7ef23-dcec-41e3-8b24-55431aea6b45", 00:11:44.642 "is_configured": true, 00:11:44.642 "data_offset": 2048, 00:11:44.642 "data_size": 63488 00:11:44.642 } 00:11:44.642 ] 00:11:44.642 } 00:11:44.642 } 00:11:44.642 }' 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.642 BaseBdev2 00:11:44.642 BaseBdev3 00:11:44.642 BaseBdev4' 00:11:44.642 09:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.642 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.642 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.642 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.643 09:24:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.643 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.901 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.902 [2024-11-20 09:24:10.220725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.902 [2024-11-20 09:24:10.220761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.902 [2024-11-20 09:24:10.220855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.902 [2024-11-20 09:24:10.220937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.902 [2024-11-20 09:24:10.220949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72304 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72304 ']' 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72304 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72304 00:11:44.902 killing process with pid 72304 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72304' 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72304 00:11:44.902 [2024-11-20 09:24:10.270255] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.902 09:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72304 00:11:45.501 [2024-11-20 09:24:10.754341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.884 09:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.884 00:11:46.884 real 0m12.516s 00:11:46.884 user 0m19.779s 00:11:46.884 sys 0m2.217s 00:11:46.884 09:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.884 
************************************ 00:11:46.884 END TEST raid_state_function_test_sb 00:11:46.884 ************************************ 00:11:46.884 09:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.884 09:24:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:46.884 09:24:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.884 09:24:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.884 09:24:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.884 ************************************ 00:11:46.884 START TEST raid_superblock_test 00:11:46.884 ************************************ 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72981 00:11:46.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72981 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72981 ']' 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.884 09:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.884 [2024-11-20 09:24:12.164843] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:46.884 [2024-11-20 09:24:12.165069] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72981 ] 00:11:47.142 [2024-11-20 09:24:12.344877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.142 [2024-11-20 09:24:12.475099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.400 [2024-11-20 09:24:12.689957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.400 [2024-11-20 09:24:12.689998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:47.967 
09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.967 malloc1 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.967 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.967 [2024-11-20 09:24:13.167482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.967 [2024-11-20 09:24:13.167639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.967 [2024-11-20 09:24:13.167715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.967 [2024-11-20 09:24:13.167755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.967 [2024-11-20 09:24:13.170157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.967 [2024-11-20 09:24:13.170242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.967 pt1 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 malloc2 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 [2024-11-20 09:24:13.230435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.968 [2024-11-20 09:24:13.230656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.968 [2024-11-20 09:24:13.230693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.968 [2024-11-20 09:24:13.230706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.968 [2024-11-20 09:24:13.233210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.968 [2024-11-20 09:24:13.233251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.968 
pt2 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 malloc3 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 [2024-11-20 09:24:13.303265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.968 [2024-11-20 09:24:13.303399] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.968 [2024-11-20 09:24:13.303453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.968 [2024-11-20 09:24:13.303488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.968 [2024-11-20 09:24:13.305919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.968 [2024-11-20 09:24:13.306003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.968 pt3 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 malloc4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 [2024-11-20 09:24:13.363921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.968 [2024-11-20 09:24:13.364042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.968 [2024-11-20 09:24:13.364087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.968 [2024-11-20 09:24:13.364135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.968 [2024-11-20 09:24:13.366616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.968 [2024-11-20 09:24:13.366702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.968 pt4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 [2024-11-20 09:24:13.375966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.968 [2024-11-20 
09:24:13.378181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.968 [2024-11-20 09:24:13.378347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.968 [2024-11-20 09:24:13.378466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.968 [2024-11-20 09:24:13.378722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:47.968 [2024-11-20 09:24:13.378748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.968 [2024-11-20 09:24:13.379066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:47.968 [2024-11-20 09:24:13.379269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:47.968 [2024-11-20 09:24:13.379283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:47.968 [2024-11-20 09:24:13.379493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.968 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.227 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.227 "name": "raid_bdev1", 00:11:48.227 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:48.227 "strip_size_kb": 64, 00:11:48.227 "state": "online", 00:11:48.227 "raid_level": "concat", 00:11:48.227 "superblock": true, 00:11:48.227 "num_base_bdevs": 4, 00:11:48.227 "num_base_bdevs_discovered": 4, 00:11:48.227 "num_base_bdevs_operational": 4, 00:11:48.227 "base_bdevs_list": [ 00:11:48.227 { 00:11:48.227 "name": "pt1", 00:11:48.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 "data_size": 63488 00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "name": "pt2", 00:11:48.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 "data_size": 63488 00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "name": "pt3", 00:11:48.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 
"data_size": 63488 00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "name": "pt4", 00:11:48.227 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.227 "is_configured": true, 00:11:48.227 "data_offset": 2048, 00:11:48.227 "data_size": 63488 00:11:48.227 } 00:11:48.227 ] 00:11:48.227 }' 00:11:48.227 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.227 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.487 [2024-11-20 09:24:13.875498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.487 "name": "raid_bdev1", 00:11:48.487 "aliases": [ 00:11:48.487 "bf81fef9-8719-4c53-972c-7e0d2c2a50a5" 
00:11:48.487 ], 00:11:48.487 "product_name": "Raid Volume", 00:11:48.487 "block_size": 512, 00:11:48.487 "num_blocks": 253952, 00:11:48.487 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:48.487 "assigned_rate_limits": { 00:11:48.487 "rw_ios_per_sec": 0, 00:11:48.487 "rw_mbytes_per_sec": 0, 00:11:48.487 "r_mbytes_per_sec": 0, 00:11:48.487 "w_mbytes_per_sec": 0 00:11:48.487 }, 00:11:48.487 "claimed": false, 00:11:48.487 "zoned": false, 00:11:48.487 "supported_io_types": { 00:11:48.487 "read": true, 00:11:48.487 "write": true, 00:11:48.487 "unmap": true, 00:11:48.487 "flush": true, 00:11:48.487 "reset": true, 00:11:48.487 "nvme_admin": false, 00:11:48.487 "nvme_io": false, 00:11:48.487 "nvme_io_md": false, 00:11:48.487 "write_zeroes": true, 00:11:48.487 "zcopy": false, 00:11:48.487 "get_zone_info": false, 00:11:48.487 "zone_management": false, 00:11:48.487 "zone_append": false, 00:11:48.487 "compare": false, 00:11:48.487 "compare_and_write": false, 00:11:48.487 "abort": false, 00:11:48.487 "seek_hole": false, 00:11:48.487 "seek_data": false, 00:11:48.487 "copy": false, 00:11:48.487 "nvme_iov_md": false 00:11:48.487 }, 00:11:48.487 "memory_domains": [ 00:11:48.487 { 00:11:48.487 "dma_device_id": "system", 00:11:48.487 "dma_device_type": 1 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.487 "dma_device_type": 2 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": "system", 00:11:48.487 "dma_device_type": 1 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.487 "dma_device_type": 2 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": "system", 00:11:48.487 "dma_device_type": 1 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.487 "dma_device_type": 2 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": "system", 00:11:48.487 "dma_device_type": 1 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:48.487 "dma_device_type": 2 00:11:48.487 } 00:11:48.487 ], 00:11:48.487 "driver_specific": { 00:11:48.487 "raid": { 00:11:48.487 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:48.487 "strip_size_kb": 64, 00:11:48.487 "state": "online", 00:11:48.487 "raid_level": "concat", 00:11:48.487 "superblock": true, 00:11:48.487 "num_base_bdevs": 4, 00:11:48.487 "num_base_bdevs_discovered": 4, 00:11:48.487 "num_base_bdevs_operational": 4, 00:11:48.487 "base_bdevs_list": [ 00:11:48.487 { 00:11:48.487 "name": "pt1", 00:11:48.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.487 "is_configured": true, 00:11:48.487 "data_offset": 2048, 00:11:48.487 "data_size": 63488 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "name": "pt2", 00:11:48.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.487 "is_configured": true, 00:11:48.487 "data_offset": 2048, 00:11:48.487 "data_size": 63488 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "name": "pt3", 00:11:48.487 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.487 "is_configured": true, 00:11:48.487 "data_offset": 2048, 00:11:48.487 "data_size": 63488 00:11:48.487 }, 00:11:48.487 { 00:11:48.487 "name": "pt4", 00:11:48.487 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.487 "is_configured": true, 00:11:48.487 "data_offset": 2048, 00:11:48.487 "data_size": 63488 00:11:48.487 } 00:11:48.487 ] 00:11:48.487 } 00:11:48.487 } 00:11:48.487 }' 00:11:48.487 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.746 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.746 pt2 00:11:48.746 pt3 00:11:48.746 pt4' 00:11:48.746 09:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.746 09:24:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.746 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.005 [2024-11-20 09:24:14.238881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bf81fef9-8719-4c53-972c-7e0d2c2a50a5 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bf81fef9-8719-4c53-972c-7e0d2c2a50a5 ']' 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.005 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.005 [2024-11-20 09:24:14.270496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.006 [2024-11-20 09:24:14.270528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.006 [2024-11-20 09:24:14.270641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.006 [2024-11-20 09:24:14.270722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.006 [2024-11-20 09:24:14.270739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.006 09:24:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.006 [2024-11-20 09:24:14.438166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:49.006 [2024-11-20 09:24:14.440280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:49.006 [2024-11-20 09:24:14.440333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:49.006 [2024-11-20 09:24:14.440369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:49.006 [2024-11-20 09:24:14.440426] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:49.006 [2024-11-20 09:24:14.440508] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:49.006 [2024-11-20 09:24:14.440529] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:49.006 [2024-11-20 09:24:14.440550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:49.006 [2024-11-20 09:24:14.440565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.006 [2024-11-20 09:24:14.440577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:49.006 request: 00:11:49.006 { 00:11:49.006 "name": "raid_bdev1", 00:11:49.006 "raid_level": "concat", 00:11:49.006 "base_bdevs": [ 00:11:49.006 "malloc1", 00:11:49.006 "malloc2", 00:11:49.006 "malloc3", 00:11:49.006 "malloc4" 00:11:49.006 ], 00:11:49.006 "strip_size_kb": 64, 00:11:49.006 "superblock": false, 00:11:49.006 "method": "bdev_raid_create", 00:11:49.006 "req_id": 1 00:11:49.006 } 00:11:49.006 Got JSON-RPC error response 00:11:49.006 response: 00:11:49.006 { 00:11:49.006 "code": -17, 00:11:49.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:49.006 } 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.006 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.264 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.264 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:49.264 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:49.264 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.265 [2024-11-20 09:24:14.490043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:49.265 [2024-11-20 09:24:14.490183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.265 [2024-11-20 09:24:14.490226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.265 [2024-11-20 09:24:14.490268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.265 [2024-11-20 09:24:14.492786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.265 [2024-11-20 09:24:14.492876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:49.265 [2024-11-20 09:24:14.492988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:49.265 [2024-11-20 09:24:14.493095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:49.265 pt1 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.265 "name": "raid_bdev1", 00:11:49.265 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:49.265 "strip_size_kb": 64, 00:11:49.265 "state": "configuring", 00:11:49.265 "raid_level": "concat", 00:11:49.265 "superblock": true, 00:11:49.265 "num_base_bdevs": 4, 00:11:49.265 "num_base_bdevs_discovered": 1, 00:11:49.265 "num_base_bdevs_operational": 4, 00:11:49.265 "base_bdevs_list": [ 00:11:49.265 { 00:11:49.265 "name": "pt1", 00:11:49.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.265 "is_configured": true, 00:11:49.265 "data_offset": 2048, 00:11:49.265 "data_size": 63488 00:11:49.265 }, 00:11:49.265 { 00:11:49.265 "name": null, 00:11:49.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.265 "is_configured": false, 00:11:49.265 "data_offset": 2048, 00:11:49.265 "data_size": 63488 00:11:49.265 }, 00:11:49.265 { 00:11:49.265 "name": null, 00:11:49.265 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.265 "is_configured": false, 00:11:49.265 "data_offset": 2048, 00:11:49.265 "data_size": 63488 00:11:49.265 }, 00:11:49.265 { 00:11:49.265 "name": null, 00:11:49.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.265 "is_configured": false, 00:11:49.265 "data_offset": 2048, 00:11:49.265 "data_size": 63488 00:11:49.265 } 00:11:49.265 ] 00:11:49.265 }' 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.265 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.523 [2024-11-20 09:24:14.961300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.523 [2024-11-20 09:24:14.961468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.523 [2024-11-20 09:24:14.961511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:49.523 [2024-11-20 09:24:14.961527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.523 [2024-11-20 09:24:14.962014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.523 [2024-11-20 09:24:14.962038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.523 [2024-11-20 09:24:14.962125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.523 [2024-11-20 09:24:14.962151] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.523 pt2 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.523 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.523 [2024-11-20 09:24:14.973284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.783 09:24:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.783 09:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.783 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.783 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.783 "name": "raid_bdev1", 00:11:49.783 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:49.783 "strip_size_kb": 64, 00:11:49.783 "state": "configuring", 00:11:49.783 "raid_level": "concat", 00:11:49.783 "superblock": true, 00:11:49.783 "num_base_bdevs": 4, 00:11:49.783 "num_base_bdevs_discovered": 1, 00:11:49.783 "num_base_bdevs_operational": 4, 00:11:49.783 "base_bdevs_list": [ 00:11:49.783 { 00:11:49.783 "name": "pt1", 00:11:49.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.783 "is_configured": true, 00:11:49.783 "data_offset": 2048, 00:11:49.783 "data_size": 63488 00:11:49.783 }, 00:11:49.783 { 00:11:49.783 "name": null, 00:11:49.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.783 "is_configured": false, 00:11:49.783 "data_offset": 0, 00:11:49.783 "data_size": 63488 00:11:49.783 }, 00:11:49.783 { 00:11:49.783 "name": null, 00:11:49.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.783 "is_configured": false, 00:11:49.783 "data_offset": 2048, 00:11:49.783 "data_size": 63488 00:11:49.783 }, 00:11:49.783 { 00:11:49.783 "name": null, 00:11:49.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.783 "is_configured": false, 00:11:49.783 "data_offset": 2048, 00:11:49.783 "data_size": 63488 00:11:49.783 } 00:11:49.783 ] 00:11:49.783 }' 00:11:49.783 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.783 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.044 [2024-11-20 09:24:15.468483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.044 [2024-11-20 09:24:15.468621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.044 [2024-11-20 09:24:15.468662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:50.044 [2024-11-20 09:24:15.468718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.044 [2024-11-20 09:24:15.469235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.044 [2024-11-20 09:24:15.469306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:50.044 [2024-11-20 09:24:15.469427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:50.044 [2024-11-20 09:24:15.469495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.044 pt2 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.044 [2024-11-20 09:24:15.480400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:50.044 [2024-11-20 09:24:15.480503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.044 [2024-11-20 09:24:15.480564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:50.044 [2024-11-20 09:24:15.480617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.044 [2024-11-20 09:24:15.481039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.044 [2024-11-20 09:24:15.481105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:50.044 [2024-11-20 09:24:15.481200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:50.044 [2024-11-20 09:24:15.481250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.044 pt3 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.044 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.044 [2024-11-20 09:24:15.492365] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:50.044 [2024-11-20 09:24:15.492469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.044 [2024-11-20 09:24:15.492494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:50.044 [2024-11-20 09:24:15.492503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.044 [2024-11-20 09:24:15.492917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.044 [2024-11-20 09:24:15.492943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:50.044 [2024-11-20 09:24:15.493010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:50.044 [2024-11-20 09:24:15.493028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:50.044 [2024-11-20 09:24:15.493197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:50.044 [2024-11-20 09:24:15.493212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.044 [2024-11-20 09:24:15.493473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:50.044 [2024-11-20 09:24:15.493639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:50.044 [2024-11-20 09:24:15.493653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:50.044 [2024-11-20 09:24:15.493790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.304 pt4 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:50.304 
09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.304 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.304 "name": "raid_bdev1", 00:11:50.304 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:50.304 "strip_size_kb": 64, 00:11:50.304 "state": "online", 00:11:50.304 "raid_level": "concat", 00:11:50.304 "superblock": true, 00:11:50.304 
"num_base_bdevs": 4, 00:11:50.304 "num_base_bdevs_discovered": 4, 00:11:50.304 "num_base_bdevs_operational": 4, 00:11:50.304 "base_bdevs_list": [ 00:11:50.304 { 00:11:50.304 "name": "pt1", 00:11:50.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.305 "is_configured": true, 00:11:50.305 "data_offset": 2048, 00:11:50.305 "data_size": 63488 00:11:50.305 }, 00:11:50.305 { 00:11:50.305 "name": "pt2", 00:11:50.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.305 "is_configured": true, 00:11:50.305 "data_offset": 2048, 00:11:50.305 "data_size": 63488 00:11:50.305 }, 00:11:50.305 { 00:11:50.305 "name": "pt3", 00:11:50.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.305 "is_configured": true, 00:11:50.305 "data_offset": 2048, 00:11:50.305 "data_size": 63488 00:11:50.305 }, 00:11:50.305 { 00:11:50.305 "name": "pt4", 00:11:50.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.305 "is_configured": true, 00:11:50.305 "data_offset": 2048, 00:11:50.305 "data_size": 63488 00:11:50.305 } 00:11:50.305 ] 00:11:50.305 }' 00:11:50.305 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.305 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.565 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.565 [2024-11-20 09:24:15.936086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.566 09:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.566 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.566 "name": "raid_bdev1", 00:11:50.566 "aliases": [ 00:11:50.566 "bf81fef9-8719-4c53-972c-7e0d2c2a50a5" 00:11:50.566 ], 00:11:50.566 "product_name": "Raid Volume", 00:11:50.566 "block_size": 512, 00:11:50.566 "num_blocks": 253952, 00:11:50.566 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:50.566 "assigned_rate_limits": { 00:11:50.566 "rw_ios_per_sec": 0, 00:11:50.566 "rw_mbytes_per_sec": 0, 00:11:50.566 "r_mbytes_per_sec": 0, 00:11:50.566 "w_mbytes_per_sec": 0 00:11:50.566 }, 00:11:50.566 "claimed": false, 00:11:50.566 "zoned": false, 00:11:50.566 "supported_io_types": { 00:11:50.566 "read": true, 00:11:50.566 "write": true, 00:11:50.566 "unmap": true, 00:11:50.566 "flush": true, 00:11:50.566 "reset": true, 00:11:50.566 "nvme_admin": false, 00:11:50.566 "nvme_io": false, 00:11:50.566 "nvme_io_md": false, 00:11:50.566 "write_zeroes": true, 00:11:50.566 "zcopy": false, 00:11:50.566 "get_zone_info": false, 00:11:50.566 "zone_management": false, 00:11:50.566 "zone_append": false, 00:11:50.566 "compare": false, 00:11:50.566 "compare_and_write": false, 00:11:50.566 "abort": false, 00:11:50.566 "seek_hole": false, 00:11:50.566 "seek_data": false, 00:11:50.566 "copy": false, 00:11:50.566 "nvme_iov_md": false 00:11:50.566 }, 00:11:50.566 "memory_domains": [ 00:11:50.566 { 00:11:50.566 "dma_device_id": "system", 
00:11:50.566 "dma_device_type": 1 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.566 "dma_device_type": 2 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "system", 00:11:50.566 "dma_device_type": 1 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.566 "dma_device_type": 2 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "system", 00:11:50.566 "dma_device_type": 1 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.566 "dma_device_type": 2 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "system", 00:11:50.566 "dma_device_type": 1 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.566 "dma_device_type": 2 00:11:50.566 } 00:11:50.566 ], 00:11:50.566 "driver_specific": { 00:11:50.566 "raid": { 00:11:50.566 "uuid": "bf81fef9-8719-4c53-972c-7e0d2c2a50a5", 00:11:50.566 "strip_size_kb": 64, 00:11:50.566 "state": "online", 00:11:50.566 "raid_level": "concat", 00:11:50.566 "superblock": true, 00:11:50.566 "num_base_bdevs": 4, 00:11:50.566 "num_base_bdevs_discovered": 4, 00:11:50.566 "num_base_bdevs_operational": 4, 00:11:50.566 "base_bdevs_list": [ 00:11:50.566 { 00:11:50.566 "name": "pt1", 00:11:50.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.566 "is_configured": true, 00:11:50.566 "data_offset": 2048, 00:11:50.566 "data_size": 63488 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "name": "pt2", 00:11:50.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.566 "is_configured": true, 00:11:50.566 "data_offset": 2048, 00:11:50.566 "data_size": 63488 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "name": "pt3", 00:11:50.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.566 "is_configured": true, 00:11:50.566 "data_offset": 2048, 00:11:50.566 "data_size": 63488 00:11:50.566 }, 00:11:50.566 { 00:11:50.566 "name": "pt4", 00:11:50.566 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.566 "is_configured": true, 00:11:50.566 "data_offset": 2048, 00:11:50.566 "data_size": 63488 00:11:50.566 } 00:11:50.566 ] 00:11:50.566 } 00:11:50.566 } 00:11:50.566 }' 00:11:50.566 09:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.566 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:50.566 pt2 00:11:50.566 pt3 00:11:50.566 pt4' 00:11:50.566 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.826 09:24:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.826 [2024-11-20 09:24:16.227521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bf81fef9-8719-4c53-972c-7e0d2c2a50a5 '!=' bf81fef9-8719-4c53-972c-7e0d2c2a50a5 ']' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72981 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72981 ']' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72981 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:50.826 09:24:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.826 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72981 00:11:51.085 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.085 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.085 killing process with pid 72981 00:11:51.085 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72981' 00:11:51.085 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72981 00:11:51.085 [2024-11-20 09:24:16.300183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.085 [2024-11-20 09:24:16.300288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.085 09:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72981 00:11:51.085 [2024-11-20 09:24:16.300368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.085 [2024-11-20 09:24:16.300380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:51.344 [2024-11-20 09:24:16.731281] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.725 09:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:52.725 00:11:52.725 real 0m5.892s 00:11:52.725 user 0m8.448s 00:11:52.725 sys 0m0.991s 00:11:52.725 ************************************ 00:11:52.725 END TEST raid_superblock_test 00:11:52.725 ************************************ 00:11:52.725 09:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.725 09:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.725 
09:24:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:52.725 09:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.725 09:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.725 09:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.725 ************************************ 00:11:52.725 START TEST raid_read_error_test 00:11:52.725 ************************************ 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8oQ7BHbBcE 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73251 00:11:52.725 09:24:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73251 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73251 ']' 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.725 09:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.725 [2024-11-20 09:24:18.144562] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:52.725 [2024-11-20 09:24:18.144700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73251 ] 00:11:52.984 [2024-11-20 09:24:18.326371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.243 [2024-11-20 09:24:18.455342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.243 [2024-11-20 09:24:18.685731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.243 [2024-11-20 09:24:18.685798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 BaseBdev1_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 true 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 [2024-11-20 09:24:19.132249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:53.863 [2024-11-20 09:24:19.132317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.863 [2024-11-20 09:24:19.132342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:53.863 [2024-11-20 09:24:19.132355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.863 [2024-11-20 09:24:19.134850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.863 [2024-11-20 09:24:19.134896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.863 BaseBdev1 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 BaseBdev2_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 true 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 [2024-11-20 09:24:19.204961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:53.863 [2024-11-20 09:24:19.205025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.863 [2024-11-20 09:24:19.205045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:53.863 [2024-11-20 09:24:19.205057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.863 [2024-11-20 09:24:19.207453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.863 [2024-11-20 09:24:19.207494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.863 BaseBdev2 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 BaseBdev3_malloc 00:11:53.863 09:24:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 true 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 [2024-11-20 09:24:19.288798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.863 [2024-11-20 09:24:19.288861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.863 [2024-11-20 09:24:19.288882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.863 [2024-11-20 09:24:19.288894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.863 [2024-11-20 09:24:19.291309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.863 [2024-11-20 09:24:19.291404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.863 BaseBdev3 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.863 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.125 BaseBdev4_malloc 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.125 true 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.125 [2024-11-20 09:24:19.360287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:54.125 [2024-11-20 09:24:19.360366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.125 [2024-11-20 09:24:19.360391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:54.125 [2024-11-20 09:24:19.360403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.125 [2024-11-20 09:24:19.362856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.125 [2024-11-20 09:24:19.362969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:54.125 BaseBdev4 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.125 [2024-11-20 09:24:19.372395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.125 [2024-11-20 09:24:19.374535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.125 [2024-11-20 09:24:19.374625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.125 [2024-11-20 09:24:19.374700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.125 [2024-11-20 09:24:19.374979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:54.125 [2024-11-20 09:24:19.374997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:54.125 [2024-11-20 09:24:19.375311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:54.125 [2024-11-20 09:24:19.375542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:54.125 [2024-11-20 09:24:19.375556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:54.125 [2024-11-20 09:24:19.375797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:54.125 09:24:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.125 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.125 "name": "raid_bdev1", 00:11:54.125 "uuid": "f98082cf-1281-4942-9011-ab1a40abf3b1", 00:11:54.125 "strip_size_kb": 64, 00:11:54.125 "state": "online", 00:11:54.125 "raid_level": "concat", 00:11:54.125 "superblock": true, 00:11:54.125 "num_base_bdevs": 4, 00:11:54.125 "num_base_bdevs_discovered": 4, 00:11:54.125 "num_base_bdevs_operational": 4, 00:11:54.125 "base_bdevs_list": [ 
00:11:54.125 { 00:11:54.125 "name": "BaseBdev1", 00:11:54.125 "uuid": "997ef9c9-1430-59c7-8829-2ddef0d2e746", 00:11:54.125 "is_configured": true, 00:11:54.125 "data_offset": 2048, 00:11:54.125 "data_size": 63488 00:11:54.125 }, 00:11:54.125 { 00:11:54.125 "name": "BaseBdev2", 00:11:54.125 "uuid": "f552487c-3646-570b-aa72-5b2711d6676d", 00:11:54.125 "is_configured": true, 00:11:54.125 "data_offset": 2048, 00:11:54.125 "data_size": 63488 00:11:54.125 }, 00:11:54.125 { 00:11:54.125 "name": "BaseBdev3", 00:11:54.126 "uuid": "7ec775c6-c9b3-54ee-9637-6efe0450b582", 00:11:54.126 "is_configured": true, 00:11:54.126 "data_offset": 2048, 00:11:54.126 "data_size": 63488 00:11:54.126 }, 00:11:54.126 { 00:11:54.126 "name": "BaseBdev4", 00:11:54.126 "uuid": "103491fd-aa67-5378-9fe5-ec6f81147ab8", 00:11:54.126 "is_configured": true, 00:11:54.126 "data_offset": 2048, 00:11:54.126 "data_size": 63488 00:11:54.126 } 00:11:54.126 ] 00:11:54.126 }' 00:11:54.126 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.126 09:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.385 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:54.385 09:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:54.643 [2024-11-20 09:24:19.928838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.577 09:24:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.577 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.578 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.578 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.578 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.578 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.578 09:24:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.578 "name": "raid_bdev1", 00:11:55.578 "uuid": "f98082cf-1281-4942-9011-ab1a40abf3b1", 00:11:55.578 "strip_size_kb": 64, 00:11:55.578 "state": "online", 00:11:55.578 "raid_level": "concat", 00:11:55.578 "superblock": true, 00:11:55.578 "num_base_bdevs": 4, 00:11:55.578 "num_base_bdevs_discovered": 4, 00:11:55.578 "num_base_bdevs_operational": 4, 00:11:55.578 "base_bdevs_list": [ 00:11:55.578 { 00:11:55.578 "name": "BaseBdev1", 00:11:55.578 "uuid": "997ef9c9-1430-59c7-8829-2ddef0d2e746", 00:11:55.578 "is_configured": true, 00:11:55.578 "data_offset": 2048, 00:11:55.578 "data_size": 63488 00:11:55.578 }, 00:11:55.578 { 00:11:55.578 "name": "BaseBdev2", 00:11:55.578 "uuid": "f552487c-3646-570b-aa72-5b2711d6676d", 00:11:55.578 "is_configured": true, 00:11:55.578 "data_offset": 2048, 00:11:55.578 "data_size": 63488 00:11:55.578 }, 00:11:55.578 { 00:11:55.578 "name": "BaseBdev3", 00:11:55.578 "uuid": "7ec775c6-c9b3-54ee-9637-6efe0450b582", 00:11:55.578 "is_configured": true, 00:11:55.578 "data_offset": 2048, 00:11:55.578 "data_size": 63488 00:11:55.578 }, 00:11:55.578 { 00:11:55.578 "name": "BaseBdev4", 00:11:55.578 "uuid": "103491fd-aa67-5378-9fe5-ec6f81147ab8", 00:11:55.578 "is_configured": true, 00:11:55.578 "data_offset": 2048, 00:11:55.578 "data_size": 63488 00:11:55.578 } 00:11:55.578 ] 00:11:55.578 }' 00:11:55.578 09:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.578 09:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.142 [2024-11-20 09:24:21.305539] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:56.142 [2024-11-20 09:24:21.305669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.142 [2024-11-20 09:24:21.308774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.142 [2024-11-20 09:24:21.308890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.142 [2024-11-20 09:24:21.308958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.142 [2024-11-20 09:24:21.309015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:56.142 { 00:11:56.142 "results": [ 00:11:56.142 { 00:11:56.142 "job": "raid_bdev1", 00:11:56.142 "core_mask": "0x1", 00:11:56.142 "workload": "randrw", 00:11:56.142 "percentage": 50, 00:11:56.142 "status": "finished", 00:11:56.142 "queue_depth": 1, 00:11:56.142 "io_size": 131072, 00:11:56.142 "runtime": 1.377446, 00:11:56.142 "iops": 14442.671436847615, 00:11:56.142 "mibps": 1805.3339296059519, 00:11:56.142 "io_failed": 1, 00:11:56.142 "io_timeout": 0, 00:11:56.142 "avg_latency_us": 96.1267018660193, 00:11:56.142 "min_latency_us": 27.94759825327511, 00:11:56.142 "max_latency_us": 1724.2550218340612 00:11:56.142 } 00:11:56.142 ], 00:11:56.142 "core_count": 1 00:11:56.142 } 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73251 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73251 ']' 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73251 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73251 00:11:56.142 killing process with pid 73251 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73251' 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73251 00:11:56.142 [2024-11-20 09:24:21.354657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.142 09:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73251 00:11:56.401 [2024-11-20 09:24:21.706316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.810 09:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:57.810 09:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8oQ7BHbBcE 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:57.810 ************************************ 00:11:57.810 END TEST raid_read_error_test 00:11:57.810 ************************************ 00:11:57.810 00:11:57.810 real 0m4.981s 
00:11:57.810 user 0m5.937s 00:11:57.810 sys 0m0.607s 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.810 09:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.810 09:24:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:57.810 09:24:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:57.810 09:24:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.810 09:24:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.810 ************************************ 00:11:57.810 START TEST raid_write_error_test 00:11:57.810 ************************************ 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LhJp2GoXId 00:11:57.810 09:24:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73398 00:11:57.810 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73398 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73398 ']' 00:11:57.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.811 09:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.811 [2024-11-20 09:24:23.197636] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:11:57.811 [2024-11-20 09:24:23.197766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73398 ] 00:11:58.069 [2024-11-20 09:24:23.358712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.069 [2024-11-20 09:24:23.485510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.327 [2024-11-20 09:24:23.717164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.327 [2024-11-20 09:24:23.717311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.895 BaseBdev1_malloc 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.895 true 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.895 [2024-11-20 09:24:24.118473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:58.895 [2024-11-20 09:24:24.118544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.895 [2024-11-20 09:24:24.118568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:58.895 [2024-11-20 09:24:24.118581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.895 [2024-11-20 09:24:24.121107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.895 [2024-11-20 09:24:24.121161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:58.895 BaseBdev1 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.895 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 BaseBdev2_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:58.896 09:24:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 true 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 [2024-11-20 09:24:24.190563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:58.896 [2024-11-20 09:24:24.190631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.896 [2024-11-20 09:24:24.190652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:58.896 [2024-11-20 09:24:24.190664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.896 [2024-11-20 09:24:24.193106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.896 [2024-11-20 09:24:24.193207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.896 BaseBdev2 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:58.896 BaseBdev3_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 true 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 [2024-11-20 09:24:24.278519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:58.896 [2024-11-20 09:24:24.278644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.896 [2024-11-20 09:24:24.278674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:58.896 [2024-11-20 09:24:24.278686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.896 [2024-11-20 09:24:24.281523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.896 [2024-11-20 09:24:24.281570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:58.896 BaseBdev3 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 BaseBdev4_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.896 true 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.896 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.156 [2024-11-20 09:24:24.351299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:59.156 [2024-11-20 09:24:24.351415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.156 [2024-11-20 09:24:24.351470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:59.156 [2024-11-20 09:24:24.351492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.156 [2024-11-20 09:24:24.354584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.156 [2024-11-20 09:24:24.354636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:59.156 BaseBdev4 
00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.156 [2024-11-20 09:24:24.363507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.156 [2024-11-20 09:24:24.365670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.156 [2024-11-20 09:24:24.365763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.156 [2024-11-20 09:24:24.365841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.156 [2024-11-20 09:24:24.366110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:59.156 [2024-11-20 09:24:24.366130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:59.156 [2024-11-20 09:24:24.366454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:59.156 [2024-11-20 09:24:24.366648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:59.156 [2024-11-20 09:24:24.366660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:59.156 [2024-11-20 09:24:24.366847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.156 "name": "raid_bdev1", 00:11:59.156 "uuid": "8cf7b0a2-f1ee-4f0e-a8df-1401784ad12d", 00:11:59.156 "strip_size_kb": 64, 00:11:59.156 "state": "online", 00:11:59.156 "raid_level": "concat", 00:11:59.156 "superblock": true, 00:11:59.156 "num_base_bdevs": 4, 00:11:59.156 "num_base_bdevs_discovered": 4, 00:11:59.156 
"num_base_bdevs_operational": 4, 00:11:59.156 "base_bdevs_list": [ 00:11:59.156 { 00:11:59.156 "name": "BaseBdev1", 00:11:59.156 "uuid": "8103a2f6-8c59-5857-818a-73efb63892ff", 00:11:59.156 "is_configured": true, 00:11:59.156 "data_offset": 2048, 00:11:59.156 "data_size": 63488 00:11:59.156 }, 00:11:59.156 { 00:11:59.156 "name": "BaseBdev2", 00:11:59.156 "uuid": "ba39397f-e168-56b4-a7d2-d86ba2060104", 00:11:59.156 "is_configured": true, 00:11:59.156 "data_offset": 2048, 00:11:59.156 "data_size": 63488 00:11:59.156 }, 00:11:59.156 { 00:11:59.156 "name": "BaseBdev3", 00:11:59.156 "uuid": "752ea84a-fa57-539f-8054-c8592b314eda", 00:11:59.156 "is_configured": true, 00:11:59.156 "data_offset": 2048, 00:11:59.156 "data_size": 63488 00:11:59.156 }, 00:11:59.156 { 00:11:59.156 "name": "BaseBdev4", 00:11:59.156 "uuid": "8189356b-9c1b-5494-81fb-ae63ec424610", 00:11:59.156 "is_configured": true, 00:11:59.156 "data_offset": 2048, 00:11:59.156 "data_size": 63488 00:11:59.156 } 00:11:59.156 ] 00:11:59.156 }' 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.156 09:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.425 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:59.425 09:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:59.700 [2024-11-20 09:24:24.892170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.640 09:24:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.640 "name": "raid_bdev1", 00:12:00.640 "uuid": "8cf7b0a2-f1ee-4f0e-a8df-1401784ad12d", 00:12:00.640 "strip_size_kb": 64, 00:12:00.640 "state": "online", 00:12:00.640 "raid_level": "concat", 00:12:00.640 "superblock": true, 00:12:00.640 "num_base_bdevs": 4, 00:12:00.640 "num_base_bdevs_discovered": 4, 00:12:00.640 "num_base_bdevs_operational": 4, 00:12:00.640 "base_bdevs_list": [ 00:12:00.640 { 00:12:00.640 "name": "BaseBdev1", 00:12:00.640 "uuid": "8103a2f6-8c59-5857-818a-73efb63892ff", 00:12:00.640 "is_configured": true, 00:12:00.640 "data_offset": 2048, 00:12:00.640 "data_size": 63488 00:12:00.640 }, 00:12:00.640 { 00:12:00.640 "name": "BaseBdev2", 00:12:00.640 "uuid": "ba39397f-e168-56b4-a7d2-d86ba2060104", 00:12:00.640 "is_configured": true, 00:12:00.640 "data_offset": 2048, 00:12:00.640 "data_size": 63488 00:12:00.640 }, 00:12:00.640 { 00:12:00.640 "name": "BaseBdev3", 00:12:00.640 "uuid": "752ea84a-fa57-539f-8054-c8592b314eda", 00:12:00.640 "is_configured": true, 00:12:00.640 "data_offset": 2048, 00:12:00.640 "data_size": 63488 00:12:00.640 }, 00:12:00.640 { 00:12:00.640 "name": "BaseBdev4", 00:12:00.640 "uuid": "8189356b-9c1b-5494-81fb-ae63ec424610", 00:12:00.640 "is_configured": true, 00:12:00.640 "data_offset": 2048, 00:12:00.640 "data_size": 63488 00:12:00.640 } 00:12:00.640 ] 00:12:00.640 }' 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.640 09:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.900 [2024-11-20 09:24:26.293826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.900 [2024-11-20 09:24:26.293867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.900 [2024-11-20 09:24:26.297103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.900 [2024-11-20 09:24:26.297239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.900 [2024-11-20 09:24:26.297300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.900 [2024-11-20 09:24:26.297318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:00.900 { 00:12:00.900 "results": [ 00:12:00.900 { 00:12:00.900 "job": "raid_bdev1", 00:12:00.900 "core_mask": "0x1", 00:12:00.900 "workload": "randrw", 00:12:00.900 "percentage": 50, 00:12:00.900 "status": "finished", 00:12:00.900 "queue_depth": 1, 00:12:00.900 "io_size": 131072, 00:12:00.900 "runtime": 1.402132, 00:12:00.900 "iops": 14181.974307697135, 00:12:00.900 "mibps": 1772.746788462142, 00:12:00.900 "io_failed": 1, 00:12:00.900 "io_timeout": 0, 00:12:00.900 "avg_latency_us": 97.98233722611901, 00:12:00.900 "min_latency_us": 27.165065502183406, 00:12:00.900 "max_latency_us": 1745.7187772925763 00:12:00.900 } 00:12:00.900 ], 00:12:00.900 "core_count": 1 00:12:00.900 } 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73398 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73398 ']' 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73398 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73398 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.900 killing process with pid 73398 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73398' 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73398 00:12:00.900 [2024-11-20 09:24:26.343752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.900 09:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73398 00:12:01.475 [2024-11-20 09:24:26.715795] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LhJp2GoXId 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:02.868 ************************************ 00:12:02.868 END TEST raid_write_error_test 00:12:02.868 ************************************ 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:02.868 09:24:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:02.868 00:12:02.868 real 0m4.957s 00:12:02.868 user 0m5.842s 00:12:02.868 sys 0m0.574s 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.868 09:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.868 09:24:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:02.868 09:24:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:02.868 09:24:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.868 09:24:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.868 09:24:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.868 ************************************ 00:12:02.868 START TEST raid_state_function_test 00:12:02.868 ************************************ 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:02.868 09:24:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73546 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73546' 00:12:02.868 Process raid pid: 73546 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73546 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73546 ']' 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.868 09:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.868 [2024-11-20 09:24:28.214342] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
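At this point the trace above launches `bdev_svc -i 0 -L bdev_raid` and blocks in `waitforlisten` until the app's RPC socket at `/var/tmp/spdk.sock` is ready. A minimal sketch of that polling pattern follows, using a hypothetical `wait_for_sock` helper (the suite's real `waitforlisten` is more thorough: it also tracks the PID and retries actual RPC calls):

```shell
# Hypothetical stand-in for the suite's waitforlisten: poll until the
# given path exists as a UNIX socket, or give up after N retries.
wait_for_sock() {
    sock="$1"
    retries="${2:-100}"
    i=0
    while [ "$i" -lt "$retries" ]; do
        # -S becomes true once the app has created its RPC listen socket
        [ -S "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

# Typical use (requires a running SPDK app, so not exercised here):
#   bdev_svc -i 0 -L bdev_raid &
#   wait_for_sock /var/tmp/spdk.sock && rpc.py bdev_raid_get_bdevs all
```

The function only checks for socket existence; in the real harness the follow-up RPC traffic (as seen in the trace) is what confirms the reactor is serving requests.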
00:12:02.868 [2024-11-20 09:24:28.214614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.128 [2024-11-20 09:24:28.398546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.128 [2024-11-20 09:24:28.533513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.388 [2024-11-20 09:24:28.774074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.388 [2024-11-20 09:24:28.774127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.956 [2024-11-20 09:24:29.136204] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.956 [2024-11-20 09:24:29.136276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.956 [2024-11-20 09:24:29.136288] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.956 [2024-11-20 09:24:29.136300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.956 [2024-11-20 09:24:29.136308] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:03.956 [2024-11-20 09:24:29.136319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.956 [2024-11-20 09:24:29.136326] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.956 [2024-11-20 09:24:29.136336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.956 "name": "Existed_Raid", 00:12:03.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.956 "strip_size_kb": 0, 00:12:03.956 "state": "configuring", 00:12:03.956 "raid_level": "raid1", 00:12:03.956 "superblock": false, 00:12:03.956 "num_base_bdevs": 4, 00:12:03.956 "num_base_bdevs_discovered": 0, 00:12:03.956 "num_base_bdevs_operational": 4, 00:12:03.956 "base_bdevs_list": [ 00:12:03.956 { 00:12:03.956 "name": "BaseBdev1", 00:12:03.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.956 "is_configured": false, 00:12:03.956 "data_offset": 0, 00:12:03.956 "data_size": 0 00:12:03.956 }, 00:12:03.956 { 00:12:03.956 "name": "BaseBdev2", 00:12:03.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.956 "is_configured": false, 00:12:03.956 "data_offset": 0, 00:12:03.956 "data_size": 0 00:12:03.956 }, 00:12:03.956 { 00:12:03.956 "name": "BaseBdev3", 00:12:03.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.956 "is_configured": false, 00:12:03.956 "data_offset": 0, 00:12:03.956 "data_size": 0 00:12:03.956 }, 00:12:03.956 { 00:12:03.956 "name": "BaseBdev4", 00:12:03.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.956 "is_configured": false, 00:12:03.956 "data_offset": 0, 00:12:03.956 "data_size": 0 00:12:03.956 } 00:12:03.956 ] 00:12:03.956 }' 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.956 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.215 [2024-11-20 09:24:29.631400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.215 [2024-11-20 09:24:29.631546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.215 [2024-11-20 09:24:29.643358] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.215 [2024-11-20 09:24:29.643403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.215 [2024-11-20 09:24:29.643414] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.215 [2024-11-20 09:24:29.643425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.215 [2024-11-20 09:24:29.643448] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.215 [2024-11-20 09:24:29.643458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.215 [2024-11-20 09:24:29.643465] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:04.215 [2024-11-20 09:24:29.643475] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.215 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.474 [2024-11-20 09:24:29.696206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.474 BaseBdev1 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.474 [ 00:12:04.474 { 00:12:04.474 "name": "BaseBdev1", 00:12:04.474 "aliases": [ 00:12:04.474 "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3" 00:12:04.474 ], 00:12:04.474 "product_name": "Malloc disk", 00:12:04.474 "block_size": 512, 00:12:04.474 "num_blocks": 65536, 00:12:04.474 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:04.474 "assigned_rate_limits": { 00:12:04.474 "rw_ios_per_sec": 0, 00:12:04.474 "rw_mbytes_per_sec": 0, 00:12:04.474 "r_mbytes_per_sec": 0, 00:12:04.474 "w_mbytes_per_sec": 0 00:12:04.474 }, 00:12:04.474 "claimed": true, 00:12:04.474 "claim_type": "exclusive_write", 00:12:04.474 "zoned": false, 00:12:04.474 "supported_io_types": { 00:12:04.474 "read": true, 00:12:04.474 "write": true, 00:12:04.474 "unmap": true, 00:12:04.474 "flush": true, 00:12:04.474 "reset": true, 00:12:04.474 "nvme_admin": false, 00:12:04.474 "nvme_io": false, 00:12:04.474 "nvme_io_md": false, 00:12:04.474 "write_zeroes": true, 00:12:04.474 "zcopy": true, 00:12:04.474 "get_zone_info": false, 00:12:04.474 "zone_management": false, 00:12:04.474 "zone_append": false, 00:12:04.474 "compare": false, 00:12:04.474 "compare_and_write": false, 00:12:04.474 "abort": true, 00:12:04.474 "seek_hole": false, 00:12:04.474 "seek_data": false, 00:12:04.474 "copy": true, 00:12:04.474 "nvme_iov_md": false 00:12:04.474 }, 00:12:04.474 "memory_domains": [ 00:12:04.474 { 00:12:04.474 "dma_device_id": "system", 00:12:04.474 "dma_device_type": 1 00:12:04.474 }, 00:12:04.474 { 00:12:04.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.474 "dma_device_type": 2 00:12:04.474 } 00:12:04.474 ], 00:12:04.474 "driver_specific": {} 00:12:04.474 } 00:12:04.474 ] 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.474 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.475 "name": "Existed_Raid", 
00:12:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.475 "strip_size_kb": 0, 00:12:04.475 "state": "configuring", 00:12:04.475 "raid_level": "raid1", 00:12:04.475 "superblock": false, 00:12:04.475 "num_base_bdevs": 4, 00:12:04.475 "num_base_bdevs_discovered": 1, 00:12:04.475 "num_base_bdevs_operational": 4, 00:12:04.475 "base_bdevs_list": [ 00:12:04.475 { 00:12:04.475 "name": "BaseBdev1", 00:12:04.475 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:04.475 "is_configured": true, 00:12:04.475 "data_offset": 0, 00:12:04.475 "data_size": 65536 00:12:04.475 }, 00:12:04.475 { 00:12:04.475 "name": "BaseBdev2", 00:12:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.475 "is_configured": false, 00:12:04.475 "data_offset": 0, 00:12:04.475 "data_size": 0 00:12:04.475 }, 00:12:04.475 { 00:12:04.475 "name": "BaseBdev3", 00:12:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.475 "is_configured": false, 00:12:04.475 "data_offset": 0, 00:12:04.475 "data_size": 0 00:12:04.475 }, 00:12:04.475 { 00:12:04.475 "name": "BaseBdev4", 00:12:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.475 "is_configured": false, 00:12:04.475 "data_offset": 0, 00:12:04.475 "data_size": 0 00:12:04.475 } 00:12:04.475 ] 00:12:04.475 }' 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.475 09:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.042 [2024-11-20 09:24:30.215394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.042 [2024-11-20 09:24:30.215547] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.042 [2024-11-20 09:24:30.227413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.042 [2024-11-20 09:24:30.229462] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.042 [2024-11-20 09:24:30.229507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.042 [2024-11-20 09:24:30.229518] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.042 [2024-11-20 09:24:30.229530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.042 [2024-11-20 09:24:30.229537] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.042 [2024-11-20 09:24:30.229547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.042 
09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.042 "name": "Existed_Raid", 00:12:05.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.042 "strip_size_kb": 0, 00:12:05.042 "state": "configuring", 00:12:05.042 "raid_level": "raid1", 00:12:05.042 "superblock": false, 00:12:05.042 "num_base_bdevs": 4, 00:12:05.042 "num_base_bdevs_discovered": 1, 
00:12:05.042 "num_base_bdevs_operational": 4, 00:12:05.042 "base_bdevs_list": [ 00:12:05.042 { 00:12:05.042 "name": "BaseBdev1", 00:12:05.042 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:05.042 "is_configured": true, 00:12:05.042 "data_offset": 0, 00:12:05.042 "data_size": 65536 00:12:05.042 }, 00:12:05.042 { 00:12:05.042 "name": "BaseBdev2", 00:12:05.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.042 "is_configured": false, 00:12:05.042 "data_offset": 0, 00:12:05.042 "data_size": 0 00:12:05.042 }, 00:12:05.042 { 00:12:05.042 "name": "BaseBdev3", 00:12:05.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.042 "is_configured": false, 00:12:05.042 "data_offset": 0, 00:12:05.042 "data_size": 0 00:12:05.042 }, 00:12:05.042 { 00:12:05.042 "name": "BaseBdev4", 00:12:05.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.042 "is_configured": false, 00:12:05.042 "data_offset": 0, 00:12:05.042 "data_size": 0 00:12:05.042 } 00:12:05.042 ] 00:12:05.042 }' 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.042 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.301 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.301 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.301 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.559 [2024-11-20 09:24:30.776777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.559 BaseBdev2 00:12:05.559 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.559 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:05.559 09:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:05.559 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.559 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.560 [ 00:12:05.560 { 00:12:05.560 "name": "BaseBdev2", 00:12:05.560 "aliases": [ 00:12:05.560 "31a26dfa-3b27-4119-84aa-60a8302b12e8" 00:12:05.560 ], 00:12:05.560 "product_name": "Malloc disk", 00:12:05.560 "block_size": 512, 00:12:05.560 "num_blocks": 65536, 00:12:05.560 "uuid": "31a26dfa-3b27-4119-84aa-60a8302b12e8", 00:12:05.560 "assigned_rate_limits": { 00:12:05.560 "rw_ios_per_sec": 0, 00:12:05.560 "rw_mbytes_per_sec": 0, 00:12:05.560 "r_mbytes_per_sec": 0, 00:12:05.560 "w_mbytes_per_sec": 0 00:12:05.560 }, 00:12:05.560 "claimed": true, 00:12:05.560 "claim_type": "exclusive_write", 00:12:05.560 "zoned": false, 00:12:05.560 "supported_io_types": { 00:12:05.560 "read": true, 
00:12:05.560 "write": true, 00:12:05.560 "unmap": true, 00:12:05.560 "flush": true, 00:12:05.560 "reset": true, 00:12:05.560 "nvme_admin": false, 00:12:05.560 "nvme_io": false, 00:12:05.560 "nvme_io_md": false, 00:12:05.560 "write_zeroes": true, 00:12:05.560 "zcopy": true, 00:12:05.560 "get_zone_info": false, 00:12:05.560 "zone_management": false, 00:12:05.560 "zone_append": false, 00:12:05.560 "compare": false, 00:12:05.560 "compare_and_write": false, 00:12:05.560 "abort": true, 00:12:05.560 "seek_hole": false, 00:12:05.560 "seek_data": false, 00:12:05.560 "copy": true, 00:12:05.560 "nvme_iov_md": false 00:12:05.560 }, 00:12:05.560 "memory_domains": [ 00:12:05.560 { 00:12:05.560 "dma_device_id": "system", 00:12:05.560 "dma_device_type": 1 00:12:05.560 }, 00:12:05.560 { 00:12:05.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.560 "dma_device_type": 2 00:12:05.560 } 00:12:05.560 ], 00:12:05.560 "driver_specific": {} 00:12:05.560 } 00:12:05.560 ] 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
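The Malloc disks reported in the `bdev_get_bdevs` output above show `"block_size": 512` and `"num_blocks": 65536`; that follows directly from the `bdev_malloc_create 32 512` calls in the trace (a 32 MiB bdev with 512-byte blocks). A quick sketch of that arithmetic:

```shell
# bdev_malloc_create <size_mb> <block_size> yields a bdev with
# num_blocks = size_mb * 1024 * 1024 / block_size
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "num_blocks=$num_blocks"   # prints num_blocks=65536, matching the JSON above
```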
00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.560 "name": "Existed_Raid", 00:12:05.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.560 "strip_size_kb": 0, 00:12:05.560 "state": "configuring", 00:12:05.560 "raid_level": "raid1", 00:12:05.560 "superblock": false, 00:12:05.560 "num_base_bdevs": 4, 00:12:05.560 "num_base_bdevs_discovered": 2, 00:12:05.560 "num_base_bdevs_operational": 4, 00:12:05.560 "base_bdevs_list": [ 00:12:05.560 { 00:12:05.560 "name": "BaseBdev1", 00:12:05.560 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:05.560 "is_configured": true, 00:12:05.560 "data_offset": 0, 00:12:05.560 "data_size": 65536 00:12:05.560 }, 00:12:05.560 { 00:12:05.560 "name": "BaseBdev2", 00:12:05.560 "uuid": "31a26dfa-3b27-4119-84aa-60a8302b12e8", 00:12:05.560 "is_configured": true, 
00:12:05.560 "data_offset": 0, 00:12:05.560 "data_size": 65536 00:12:05.560 }, 00:12:05.560 { 00:12:05.560 "name": "BaseBdev3", 00:12:05.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.560 "is_configured": false, 00:12:05.560 "data_offset": 0, 00:12:05.560 "data_size": 0 00:12:05.560 }, 00:12:05.560 { 00:12:05.560 "name": "BaseBdev4", 00:12:05.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.560 "is_configured": false, 00:12:05.560 "data_offset": 0, 00:12:05.560 "data_size": 0 00:12:05.560 } 00:12:05.560 ] 00:12:05.560 }' 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.560 09:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 [2024-11-20 09:24:31.350415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.127 BaseBdev3 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.127 [ 00:12:06.127 { 00:12:06.127 "name": "BaseBdev3", 00:12:06.127 "aliases": [ 00:12:06.127 "8cb65f51-7c10-474f-a9d1-7af26913c390" 00:12:06.127 ], 00:12:06.127 "product_name": "Malloc disk", 00:12:06.127 "block_size": 512, 00:12:06.127 "num_blocks": 65536, 00:12:06.127 "uuid": "8cb65f51-7c10-474f-a9d1-7af26913c390", 00:12:06.127 "assigned_rate_limits": { 00:12:06.127 "rw_ios_per_sec": 0, 00:12:06.127 "rw_mbytes_per_sec": 0, 00:12:06.127 "r_mbytes_per_sec": 0, 00:12:06.127 "w_mbytes_per_sec": 0 00:12:06.127 }, 00:12:06.127 "claimed": true, 00:12:06.127 "claim_type": "exclusive_write", 00:12:06.127 "zoned": false, 00:12:06.127 "supported_io_types": { 00:12:06.127 "read": true, 00:12:06.127 "write": true, 00:12:06.127 "unmap": true, 00:12:06.127 "flush": true, 00:12:06.127 "reset": true, 00:12:06.127 "nvme_admin": false, 00:12:06.127 "nvme_io": false, 00:12:06.127 "nvme_io_md": false, 00:12:06.127 "write_zeroes": true, 00:12:06.127 "zcopy": true, 00:12:06.127 "get_zone_info": false, 00:12:06.127 "zone_management": false, 00:12:06.127 "zone_append": false, 00:12:06.127 "compare": false, 00:12:06.127 "compare_and_write": false, 
00:12:06.127 "abort": true, 00:12:06.127 "seek_hole": false, 00:12:06.127 "seek_data": false, 00:12:06.127 "copy": true, 00:12:06.127 "nvme_iov_md": false 00:12:06.127 }, 00:12:06.127 "memory_domains": [ 00:12:06.127 { 00:12:06.127 "dma_device_id": "system", 00:12:06.127 "dma_device_type": 1 00:12:06.127 }, 00:12:06.127 { 00:12:06.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.127 "dma_device_type": 2 00:12:06.127 } 00:12:06.127 ], 00:12:06.127 "driver_specific": {} 00:12:06.127 } 00:12:06.127 ] 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.127 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.128 "name": "Existed_Raid", 00:12:06.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.128 "strip_size_kb": 0, 00:12:06.128 "state": "configuring", 00:12:06.128 "raid_level": "raid1", 00:12:06.128 "superblock": false, 00:12:06.128 "num_base_bdevs": 4, 00:12:06.128 "num_base_bdevs_discovered": 3, 00:12:06.128 "num_base_bdevs_operational": 4, 00:12:06.128 "base_bdevs_list": [ 00:12:06.128 { 00:12:06.128 "name": "BaseBdev1", 00:12:06.128 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:06.128 "is_configured": true, 00:12:06.128 "data_offset": 0, 00:12:06.128 "data_size": 65536 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "name": "BaseBdev2", 00:12:06.128 "uuid": "31a26dfa-3b27-4119-84aa-60a8302b12e8", 00:12:06.128 "is_configured": true, 00:12:06.128 "data_offset": 0, 00:12:06.128 "data_size": 65536 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "name": "BaseBdev3", 00:12:06.128 "uuid": "8cb65f51-7c10-474f-a9d1-7af26913c390", 00:12:06.128 "is_configured": true, 00:12:06.128 "data_offset": 0, 00:12:06.128 "data_size": 65536 00:12:06.128 }, 00:12:06.128 { 00:12:06.128 "name": "BaseBdev4", 00:12:06.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.128 "is_configured": false, 
00:12:06.128 "data_offset": 0, 00:12:06.128 "data_size": 0 00:12:06.128 } 00:12:06.128 ] 00:12:06.128 }' 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.128 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.386 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:06.386 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.386 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.644 [2024-11-20 09:24:31.870077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.644 [2024-11-20 09:24:31.870139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:06.644 [2024-11-20 09:24:31.870149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:06.644 [2024-11-20 09:24:31.870487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:06.644 [2024-11-20 09:24:31.870704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:06.644 [2024-11-20 09:24:31.870728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:06.644 [2024-11-20 09:24:31.871052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.644 BaseBdev4 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.644 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.644 [ 00:12:06.644 { 00:12:06.644 "name": "BaseBdev4", 00:12:06.644 "aliases": [ 00:12:06.644 "067c442d-a4d0-44bb-9742-56d4b26050c4" 00:12:06.644 ], 00:12:06.644 "product_name": "Malloc disk", 00:12:06.644 "block_size": 512, 00:12:06.644 "num_blocks": 65536, 00:12:06.644 "uuid": "067c442d-a4d0-44bb-9742-56d4b26050c4", 00:12:06.644 "assigned_rate_limits": { 00:12:06.644 "rw_ios_per_sec": 0, 00:12:06.644 "rw_mbytes_per_sec": 0, 00:12:06.644 "r_mbytes_per_sec": 0, 00:12:06.644 "w_mbytes_per_sec": 0 00:12:06.644 }, 00:12:06.644 "claimed": true, 00:12:06.644 "claim_type": "exclusive_write", 00:12:06.644 "zoned": false, 00:12:06.644 "supported_io_types": { 00:12:06.644 "read": true, 00:12:06.644 "write": true, 00:12:06.644 "unmap": true, 00:12:06.644 "flush": true, 00:12:06.645 "reset": true, 00:12:06.645 
"nvme_admin": false, 00:12:06.645 "nvme_io": false, 00:12:06.645 "nvme_io_md": false, 00:12:06.645 "write_zeroes": true, 00:12:06.645 "zcopy": true, 00:12:06.645 "get_zone_info": false, 00:12:06.645 "zone_management": false, 00:12:06.645 "zone_append": false, 00:12:06.645 "compare": false, 00:12:06.645 "compare_and_write": false, 00:12:06.645 "abort": true, 00:12:06.645 "seek_hole": false, 00:12:06.645 "seek_data": false, 00:12:06.645 "copy": true, 00:12:06.645 "nvme_iov_md": false 00:12:06.645 }, 00:12:06.645 "memory_domains": [ 00:12:06.645 { 00:12:06.645 "dma_device_id": "system", 00:12:06.645 "dma_device_type": 1 00:12:06.645 }, 00:12:06.645 { 00:12:06.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.645 "dma_device_type": 2 00:12:06.645 } 00:12:06.645 ], 00:12:06.645 "driver_specific": {} 00:12:06.645 } 00:12:06.645 ] 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.645 09:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.645 "name": "Existed_Raid", 00:12:06.645 "uuid": "5925e503-e74b-4d68-968b-1e56863c8a88", 00:12:06.645 "strip_size_kb": 0, 00:12:06.645 "state": "online", 00:12:06.645 "raid_level": "raid1", 00:12:06.645 "superblock": false, 00:12:06.645 "num_base_bdevs": 4, 00:12:06.645 "num_base_bdevs_discovered": 4, 00:12:06.645 "num_base_bdevs_operational": 4, 00:12:06.645 "base_bdevs_list": [ 00:12:06.645 { 00:12:06.645 "name": "BaseBdev1", 00:12:06.645 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:06.645 "is_configured": true, 00:12:06.645 "data_offset": 0, 00:12:06.645 "data_size": 65536 00:12:06.645 }, 00:12:06.645 { 00:12:06.645 "name": "BaseBdev2", 00:12:06.645 "uuid": "31a26dfa-3b27-4119-84aa-60a8302b12e8", 00:12:06.645 "is_configured": true, 00:12:06.645 "data_offset": 0, 00:12:06.645 "data_size": 65536 00:12:06.645 }, 00:12:06.645 { 00:12:06.645 "name": "BaseBdev3", 00:12:06.645 "uuid": 
"8cb65f51-7c10-474f-a9d1-7af26913c390", 00:12:06.645 "is_configured": true, 00:12:06.645 "data_offset": 0, 00:12:06.645 "data_size": 65536 00:12:06.645 }, 00:12:06.645 { 00:12:06.645 "name": "BaseBdev4", 00:12:06.645 "uuid": "067c442d-a4d0-44bb-9742-56d4b26050c4", 00:12:06.645 "is_configured": true, 00:12:06.645 "data_offset": 0, 00:12:06.645 "data_size": 65536 00:12:06.645 } 00:12:06.645 ] 00:12:06.645 }' 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.645 09:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.212 [2024-11-20 09:24:32.369738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.212 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.212 09:24:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.212 "name": "Existed_Raid", 00:12:07.212 "aliases": [ 00:12:07.212 "5925e503-e74b-4d68-968b-1e56863c8a88" 00:12:07.212 ], 00:12:07.212 "product_name": "Raid Volume", 00:12:07.212 "block_size": 512, 00:12:07.212 "num_blocks": 65536, 00:12:07.212 "uuid": "5925e503-e74b-4d68-968b-1e56863c8a88", 00:12:07.212 "assigned_rate_limits": { 00:12:07.212 "rw_ios_per_sec": 0, 00:12:07.212 "rw_mbytes_per_sec": 0, 00:12:07.212 "r_mbytes_per_sec": 0, 00:12:07.212 "w_mbytes_per_sec": 0 00:12:07.212 }, 00:12:07.212 "claimed": false, 00:12:07.212 "zoned": false, 00:12:07.212 "supported_io_types": { 00:12:07.212 "read": true, 00:12:07.212 "write": true, 00:12:07.212 "unmap": false, 00:12:07.212 "flush": false, 00:12:07.212 "reset": true, 00:12:07.212 "nvme_admin": false, 00:12:07.212 "nvme_io": false, 00:12:07.212 "nvme_io_md": false, 00:12:07.212 "write_zeroes": true, 00:12:07.212 "zcopy": false, 00:12:07.212 "get_zone_info": false, 00:12:07.212 "zone_management": false, 00:12:07.212 "zone_append": false, 00:12:07.212 "compare": false, 00:12:07.212 "compare_and_write": false, 00:12:07.212 "abort": false, 00:12:07.212 "seek_hole": false, 00:12:07.212 "seek_data": false, 00:12:07.212 "copy": false, 00:12:07.212 "nvme_iov_md": false 00:12:07.212 }, 00:12:07.212 "memory_domains": [ 00:12:07.213 { 00:12:07.213 "dma_device_id": "system", 00:12:07.213 "dma_device_type": 1 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.213 "dma_device_type": 2 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "system", 00:12:07.213 "dma_device_type": 1 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.213 "dma_device_type": 2 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "system", 00:12:07.213 "dma_device_type": 1 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:07.213 "dma_device_type": 2 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "system", 00:12:07.213 "dma_device_type": 1 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.213 "dma_device_type": 2 00:12:07.213 } 00:12:07.213 ], 00:12:07.213 "driver_specific": { 00:12:07.213 "raid": { 00:12:07.213 "uuid": "5925e503-e74b-4d68-968b-1e56863c8a88", 00:12:07.213 "strip_size_kb": 0, 00:12:07.213 "state": "online", 00:12:07.213 "raid_level": "raid1", 00:12:07.213 "superblock": false, 00:12:07.213 "num_base_bdevs": 4, 00:12:07.213 "num_base_bdevs_discovered": 4, 00:12:07.213 "num_base_bdevs_operational": 4, 00:12:07.213 "base_bdevs_list": [ 00:12:07.213 { 00:12:07.213 "name": "BaseBdev1", 00:12:07.213 "uuid": "e8e62c1c-07e3-44c6-8488-bd5bd06f56c3", 00:12:07.213 "is_configured": true, 00:12:07.213 "data_offset": 0, 00:12:07.213 "data_size": 65536 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "name": "BaseBdev2", 00:12:07.213 "uuid": "31a26dfa-3b27-4119-84aa-60a8302b12e8", 00:12:07.213 "is_configured": true, 00:12:07.213 "data_offset": 0, 00:12:07.213 "data_size": 65536 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "name": "BaseBdev3", 00:12:07.213 "uuid": "8cb65f51-7c10-474f-a9d1-7af26913c390", 00:12:07.213 "is_configured": true, 00:12:07.213 "data_offset": 0, 00:12:07.213 "data_size": 65536 00:12:07.213 }, 00:12:07.213 { 00:12:07.213 "name": "BaseBdev4", 00:12:07.213 "uuid": "067c442d-a4d0-44bb-9742-56d4b26050c4", 00:12:07.213 "is_configured": true, 00:12:07.213 "data_offset": 0, 00:12:07.213 "data_size": 65536 00:12:07.213 } 00:12:07.213 ] 00:12:07.213 } 00:12:07.213 } 00:12:07.213 }' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:07.213 BaseBdev2 00:12:07.213 BaseBdev3 
00:12:07.213 BaseBdev4' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.213 09:24:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.213 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.472 09:24:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.472 [2024-11-20 09:24:32.688870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.472 
09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.472 "name": "Existed_Raid", 00:12:07.472 "uuid": "5925e503-e74b-4d68-968b-1e56863c8a88", 00:12:07.472 "strip_size_kb": 0, 00:12:07.472 "state": "online", 00:12:07.472 "raid_level": "raid1", 00:12:07.472 "superblock": false, 00:12:07.472 "num_base_bdevs": 4, 00:12:07.472 "num_base_bdevs_discovered": 3, 00:12:07.472 "num_base_bdevs_operational": 3, 00:12:07.472 "base_bdevs_list": [ 00:12:07.472 { 00:12:07.472 "name": null, 00:12:07.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.472 "is_configured": false, 00:12:07.472 "data_offset": 0, 00:12:07.472 "data_size": 65536 00:12:07.472 }, 00:12:07.472 { 00:12:07.472 "name": "BaseBdev2", 00:12:07.472 "uuid": "31a26dfa-3b27-4119-84aa-60a8302b12e8", 00:12:07.472 "is_configured": true, 00:12:07.472 "data_offset": 0, 00:12:07.472 "data_size": 65536 00:12:07.472 }, 00:12:07.472 { 00:12:07.472 "name": "BaseBdev3", 00:12:07.472 "uuid": "8cb65f51-7c10-474f-a9d1-7af26913c390", 00:12:07.472 "is_configured": true, 00:12:07.472 "data_offset": 0, 
00:12:07.472 "data_size": 65536 00:12:07.472 }, 00:12:07.472 { 00:12:07.472 "name": "BaseBdev4", 00:12:07.472 "uuid": "067c442d-a4d0-44bb-9742-56d4b26050c4", 00:12:07.472 "is_configured": true, 00:12:07.472 "data_offset": 0, 00:12:07.472 "data_size": 65536 00:12:07.472 } 00:12:07.472 ] 00:12:07.472 }' 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.472 09:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.040 [2024-11-20 09:24:33.329623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.040 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.300 [2024-11-20 09:24:33.508963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.300 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.300 [2024-11-20 09:24:33.681769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:08.300 [2024-11-20 09:24:33.681974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.559 [2024-11-20 09:24:33.793646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.559 [2024-11-20 09:24:33.793820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.559 [2024-11-20 09:24:33.793844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:08.559 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.560 BaseBdev2 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.560 [ 00:12:08.560 { 00:12:08.560 "name": "BaseBdev2", 00:12:08.560 "aliases": [ 00:12:08.560 "aafe9ed8-75a1-4795-9974-6625edbb6c6b" 00:12:08.560 ], 00:12:08.560 "product_name": "Malloc disk", 00:12:08.560 "block_size": 512, 00:12:08.560 "num_blocks": 65536, 00:12:08.560 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:08.560 "assigned_rate_limits": { 00:12:08.560 "rw_ios_per_sec": 0, 00:12:08.560 "rw_mbytes_per_sec": 0, 00:12:08.560 "r_mbytes_per_sec": 0, 00:12:08.560 "w_mbytes_per_sec": 0 00:12:08.560 }, 00:12:08.560 "claimed": false, 00:12:08.560 "zoned": false, 00:12:08.560 "supported_io_types": { 00:12:08.560 "read": true, 00:12:08.560 "write": true, 00:12:08.560 "unmap": true, 00:12:08.560 "flush": true, 00:12:08.560 "reset": true, 00:12:08.560 "nvme_admin": false, 00:12:08.560 "nvme_io": false, 00:12:08.560 "nvme_io_md": false, 00:12:08.560 "write_zeroes": true, 00:12:08.560 "zcopy": true, 00:12:08.560 "get_zone_info": false, 00:12:08.560 "zone_management": false, 00:12:08.560 "zone_append": false, 
00:12:08.560 "compare": false, 00:12:08.560 "compare_and_write": false, 00:12:08.560 "abort": true, 00:12:08.560 "seek_hole": false, 00:12:08.560 "seek_data": false, 00:12:08.560 "copy": true, 00:12:08.560 "nvme_iov_md": false 00:12:08.560 }, 00:12:08.560 "memory_domains": [ 00:12:08.560 { 00:12:08.560 "dma_device_id": "system", 00:12:08.560 "dma_device_type": 1 00:12:08.560 }, 00:12:08.560 { 00:12:08.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.560 "dma_device_type": 2 00:12:08.560 } 00:12:08.560 ], 00:12:08.560 "driver_specific": {} 00:12:08.560 } 00:12:08.560 ] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.560 BaseBdev3 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.560 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.560 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:08.560 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.560 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.820 [ 00:12:08.820 { 00:12:08.820 "name": "BaseBdev3", 00:12:08.820 "aliases": [ 00:12:08.820 "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94" 00:12:08.820 ], 00:12:08.820 "product_name": "Malloc disk", 00:12:08.820 "block_size": 512, 00:12:08.820 "num_blocks": 65536, 00:12:08.820 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:08.820 "assigned_rate_limits": { 00:12:08.820 "rw_ios_per_sec": 0, 00:12:08.820 "rw_mbytes_per_sec": 0, 00:12:08.820 "r_mbytes_per_sec": 0, 00:12:08.820 "w_mbytes_per_sec": 0 00:12:08.820 }, 00:12:08.820 "claimed": false, 00:12:08.820 "zoned": false, 00:12:08.820 "supported_io_types": { 00:12:08.820 "read": true, 00:12:08.820 "write": true, 00:12:08.820 "unmap": true, 00:12:08.820 "flush": true, 00:12:08.820 "reset": true, 00:12:08.820 "nvme_admin": false, 00:12:08.820 "nvme_io": false, 00:12:08.820 "nvme_io_md": false, 00:12:08.820 "write_zeroes": true, 00:12:08.820 "zcopy": true, 00:12:08.820 "get_zone_info": false, 00:12:08.821 "zone_management": false, 00:12:08.821 "zone_append": false, 
00:12:08.821 "compare": false, 00:12:08.821 "compare_and_write": false, 00:12:08.821 "abort": true, 00:12:08.821 "seek_hole": false, 00:12:08.821 "seek_data": false, 00:12:08.821 "copy": true, 00:12:08.821 "nvme_iov_md": false 00:12:08.821 }, 00:12:08.821 "memory_domains": [ 00:12:08.821 { 00:12:08.821 "dma_device_id": "system", 00:12:08.821 "dma_device_type": 1 00:12:08.821 }, 00:12:08.821 { 00:12:08.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.821 "dma_device_type": 2 00:12:08.821 } 00:12:08.821 ], 00:12:08.821 "driver_specific": {} 00:12:08.821 } 00:12:08.821 ] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.821 BaseBdev4 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.821 [ 00:12:08.821 { 00:12:08.821 "name": "BaseBdev4", 00:12:08.821 "aliases": [ 00:12:08.821 "9f904174-aa0e-4d8b-a228-ff04e09fad89" 00:12:08.821 ], 00:12:08.821 "product_name": "Malloc disk", 00:12:08.821 "block_size": 512, 00:12:08.821 "num_blocks": 65536, 00:12:08.821 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:08.821 "assigned_rate_limits": { 00:12:08.821 "rw_ios_per_sec": 0, 00:12:08.821 "rw_mbytes_per_sec": 0, 00:12:08.821 "r_mbytes_per_sec": 0, 00:12:08.821 "w_mbytes_per_sec": 0 00:12:08.821 }, 00:12:08.821 "claimed": false, 00:12:08.821 "zoned": false, 00:12:08.821 "supported_io_types": { 00:12:08.821 "read": true, 00:12:08.821 "write": true, 00:12:08.821 "unmap": true, 00:12:08.821 "flush": true, 00:12:08.821 "reset": true, 00:12:08.821 "nvme_admin": false, 00:12:08.821 "nvme_io": false, 00:12:08.821 "nvme_io_md": false, 00:12:08.821 "write_zeroes": true, 00:12:08.821 "zcopy": true, 00:12:08.821 "get_zone_info": false, 00:12:08.821 "zone_management": false, 00:12:08.821 "zone_append": false, 
00:12:08.821 "compare": false, 00:12:08.821 "compare_and_write": false, 00:12:08.821 "abort": true, 00:12:08.821 "seek_hole": false, 00:12:08.821 "seek_data": false, 00:12:08.821 "copy": true, 00:12:08.821 "nvme_iov_md": false 00:12:08.821 }, 00:12:08.821 "memory_domains": [ 00:12:08.821 { 00:12:08.821 "dma_device_id": "system", 00:12:08.821 "dma_device_type": 1 00:12:08.821 }, 00:12:08.821 { 00:12:08.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.821 "dma_device_type": 2 00:12:08.821 } 00:12:08.821 ], 00:12:08.821 "driver_specific": {} 00:12:08.821 } 00:12:08.821 ] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.821 [2024-11-20 09:24:34.125051] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.821 [2024-11-20 09:24:34.125183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.821 [2024-11-20 09:24:34.125243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.821 [2024-11-20 09:24:34.127454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.821 [2024-11-20 09:24:34.127597] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:08.821 "name": "Existed_Raid", 00:12:08.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.821 "strip_size_kb": 0, 00:12:08.821 "state": "configuring", 00:12:08.821 "raid_level": "raid1", 00:12:08.821 "superblock": false, 00:12:08.821 "num_base_bdevs": 4, 00:12:08.821 "num_base_bdevs_discovered": 3, 00:12:08.821 "num_base_bdevs_operational": 4, 00:12:08.821 "base_bdevs_list": [ 00:12:08.821 { 00:12:08.821 "name": "BaseBdev1", 00:12:08.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.821 "is_configured": false, 00:12:08.821 "data_offset": 0, 00:12:08.821 "data_size": 0 00:12:08.821 }, 00:12:08.821 { 00:12:08.821 "name": "BaseBdev2", 00:12:08.821 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:08.821 "is_configured": true, 00:12:08.821 "data_offset": 0, 00:12:08.821 "data_size": 65536 00:12:08.821 }, 00:12:08.821 { 00:12:08.821 "name": "BaseBdev3", 00:12:08.821 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:08.821 "is_configured": true, 00:12:08.821 "data_offset": 0, 00:12:08.821 "data_size": 65536 00:12:08.821 }, 00:12:08.821 { 00:12:08.821 "name": "BaseBdev4", 00:12:08.821 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:08.821 "is_configured": true, 00:12:08.821 "data_offset": 0, 00:12:08.821 "data_size": 65536 00:12:08.821 } 00:12:08.821 ] 00:12:08.821 }' 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.821 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.390 [2024-11-20 09:24:34.612214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
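The `verify_raid_bdev_state` checks running through this log boil down to fetching the raid bdev JSON via `rpc_cmd bdev_raid_get_bdevs all`, selecting the entry by name with jq, and counting how many base bdevs are still `is_configured`. A minimal self-contained sketch of that counting logic, with the JSON inlined as an assumption so it runs without a live SPDK target:

```shell
#!/usr/bin/env bash
# Sample raid bdev JSON in the shape reported above by bdev_raid_get_bdevs
# (inlined here as an assumption; the real test reads it from rpc_cmd).
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "num_base_bdevs": 4,
  "base_bdevs_list": [
    { "name": "BaseBdev1", "is_configured": false },
    { "name": null,        "is_configured": false },
    { "name": "BaseBdev3", "is_configured": true },
    { "name": "BaseBdev4", "is_configured": true }
  ]
}'

# Count base bdevs still configured, mirroring num_base_bdevs_discovered.
num_base_bdevs_discovered=$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$raid_bdev_info")
state=$(jq -r '.state' <<< "$raid_bdev_info")

echo "state=$state discovered=$num_base_bdevs_discovered"
```

With two base bdevs removed, the count drops to 2 while the array keeps the same state transition the log shows (`num_base_bdevs_discovered: 2`, state `configuring`).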
00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.390 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.390 "name": "Existed_Raid", 00:12:09.390 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:09.390 "strip_size_kb": 0, 00:12:09.390 "state": "configuring", 00:12:09.390 "raid_level": "raid1", 00:12:09.390 "superblock": false, 00:12:09.390 "num_base_bdevs": 4, 00:12:09.390 "num_base_bdevs_discovered": 2, 00:12:09.390 "num_base_bdevs_operational": 4, 00:12:09.390 "base_bdevs_list": [ 00:12:09.391 { 00:12:09.391 "name": "BaseBdev1", 00:12:09.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.391 "is_configured": false, 00:12:09.391 "data_offset": 0, 00:12:09.391 "data_size": 0 00:12:09.391 }, 00:12:09.391 { 00:12:09.391 "name": null, 00:12:09.391 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:09.391 "is_configured": false, 00:12:09.391 "data_offset": 0, 00:12:09.391 "data_size": 65536 00:12:09.391 }, 00:12:09.391 { 00:12:09.391 "name": "BaseBdev3", 00:12:09.391 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:09.391 "is_configured": true, 00:12:09.391 "data_offset": 0, 00:12:09.391 "data_size": 65536 00:12:09.391 }, 00:12:09.391 { 00:12:09.391 "name": "BaseBdev4", 00:12:09.391 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:09.391 "is_configured": true, 00:12:09.391 "data_offset": 0, 00:12:09.391 "data_size": 65536 00:12:09.391 } 00:12:09.391 ] 00:12:09.391 }' 00:12:09.391 09:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.391 09:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.650 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.650 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.650 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.651 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 [2024-11-20 09:24:35.187282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.911 BaseBdev1 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 [ 00:12:09.911 { 00:12:09.911 "name": "BaseBdev1", 00:12:09.911 "aliases": [ 00:12:09.911 "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe" 00:12:09.911 ], 00:12:09.911 "product_name": "Malloc disk", 00:12:09.911 "block_size": 512, 00:12:09.911 "num_blocks": 65536, 00:12:09.911 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:09.911 "assigned_rate_limits": { 00:12:09.911 "rw_ios_per_sec": 0, 00:12:09.911 "rw_mbytes_per_sec": 0, 00:12:09.911 "r_mbytes_per_sec": 0, 00:12:09.911 "w_mbytes_per_sec": 0 00:12:09.911 }, 00:12:09.911 "claimed": true, 00:12:09.911 "claim_type": "exclusive_write", 00:12:09.911 "zoned": false, 00:12:09.911 "supported_io_types": { 00:12:09.911 "read": true, 00:12:09.911 "write": true, 00:12:09.911 "unmap": true, 00:12:09.911 "flush": true, 00:12:09.911 "reset": true, 00:12:09.911 "nvme_admin": false, 00:12:09.911 "nvme_io": false, 00:12:09.911 "nvme_io_md": false, 00:12:09.911 "write_zeroes": true, 00:12:09.911 "zcopy": true, 00:12:09.911 "get_zone_info": false, 00:12:09.911 "zone_management": false, 00:12:09.911 "zone_append": false, 00:12:09.911 "compare": false, 00:12:09.911 "compare_and_write": false, 00:12:09.911 "abort": true, 00:12:09.911 "seek_hole": false, 00:12:09.911 "seek_data": false, 00:12:09.911 "copy": true, 00:12:09.911 "nvme_iov_md": false 00:12:09.911 }, 00:12:09.911 "memory_domains": [ 00:12:09.911 { 00:12:09.911 "dma_device_id": "system", 00:12:09.911 "dma_device_type": 1 00:12:09.911 }, 00:12:09.911 { 00:12:09.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.911 "dma_device_type": 2 00:12:09.911 } 00:12:09.911 ], 00:12:09.911 "driver_specific": {} 00:12:09.911 } 00:12:09.911 ] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
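The `waitforbdev BaseBdev1` sequence above (from `autotest_common.sh`) is a poll-until-timeout pattern: repeat a probe command until it succeeds or the 2000 ms budget runs out. A hedged stand-alone sketch of that pattern, with the probe swapped from a real `rpc_cmd bdev_get_bdevs -b <name>` call (which needs a running SPDK target) to a hypothetical file-existence check:

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev polling pattern: retry a probe every 100 ms
# until it succeeds or timeout_ms elapses. The probe here is a stand-in.
waitfor() {
    local probe=$1 timeout_ms=${2:-2000} i
    for (( i = 0; i < timeout_ms / 100; i++ )); do
        if eval "$probe" >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

tmpfile=$(mktemp -u)
( sleep 0.3; touch "$tmpfile" ) &   # the "bdev" appears after ~300 ms
waitfor "test -e $tmpfile" 2000 && result=present || result=absent
rm -f "$tmpfile"
echo "$result"
```

The real helper layers `bdev_wait_for_examine` and a `-t` timeout onto the RPC itself, but the retry loop is the same shape.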
00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.911 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.911 "name": "Existed_Raid", 00:12:09.911 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:09.911 "strip_size_kb": 0, 00:12:09.911 "state": "configuring", 00:12:09.911 "raid_level": "raid1", 00:12:09.911 "superblock": false, 00:12:09.911 "num_base_bdevs": 4, 00:12:09.911 "num_base_bdevs_discovered": 3, 00:12:09.911 "num_base_bdevs_operational": 4, 00:12:09.911 "base_bdevs_list": [ 00:12:09.911 { 00:12:09.911 "name": "BaseBdev1", 00:12:09.912 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:09.912 "is_configured": true, 00:12:09.912 "data_offset": 0, 00:12:09.912 "data_size": 65536 00:12:09.912 }, 00:12:09.912 { 00:12:09.912 "name": null, 00:12:09.912 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:09.912 "is_configured": false, 00:12:09.912 "data_offset": 0, 00:12:09.912 "data_size": 65536 00:12:09.912 }, 00:12:09.912 { 00:12:09.912 "name": "BaseBdev3", 00:12:09.912 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:09.912 "is_configured": true, 00:12:09.912 "data_offset": 0, 00:12:09.912 "data_size": 65536 00:12:09.912 }, 00:12:09.912 { 00:12:09.912 "name": "BaseBdev4", 00:12:09.912 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:09.912 "is_configured": true, 00:12:09.912 "data_offset": 0, 00:12:09.912 "data_size": 65536 00:12:09.912 } 00:12:09.912 ] 00:12:09.912 }' 00:12:09.912 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.912 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.483 [2024-11-20 09:24:35.762530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.483 "name": "Existed_Raid", 00:12:10.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.483 "strip_size_kb": 0, 00:12:10.483 "state": "configuring", 00:12:10.483 "raid_level": "raid1", 00:12:10.483 "superblock": false, 00:12:10.483 "num_base_bdevs": 4, 00:12:10.483 "num_base_bdevs_discovered": 2, 00:12:10.483 "num_base_bdevs_operational": 4, 00:12:10.483 "base_bdevs_list": [ 00:12:10.483 { 00:12:10.483 "name": "BaseBdev1", 00:12:10.483 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:10.483 "is_configured": true, 00:12:10.483 "data_offset": 0, 00:12:10.483 "data_size": 65536 00:12:10.483 }, 00:12:10.483 { 00:12:10.483 "name": null, 00:12:10.483 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:10.483 "is_configured": false, 00:12:10.483 "data_offset": 0, 00:12:10.483 "data_size": 65536 00:12:10.483 }, 00:12:10.483 { 00:12:10.483 "name": null, 00:12:10.483 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:10.483 "is_configured": false, 00:12:10.483 "data_offset": 0, 00:12:10.483 "data_size": 65536 00:12:10.483 }, 00:12:10.483 { 00:12:10.483 "name": "BaseBdev4", 00:12:10.483 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:10.483 "is_configured": true, 00:12:10.483 "data_offset": 0, 00:12:10.483 "data_size": 65536 00:12:10.483 } 00:12:10.483 ] 00:12:10.483 }' 00:12:10.483 09:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.483 09:24:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 [2024-11-20 09:24:36.293686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.053 09:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.053 "name": "Existed_Raid", 00:12:11.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.053 "strip_size_kb": 0, 00:12:11.053 "state": "configuring", 00:12:11.053 "raid_level": "raid1", 00:12:11.053 "superblock": false, 00:12:11.053 "num_base_bdevs": 4, 00:12:11.053 "num_base_bdevs_discovered": 3, 00:12:11.053 "num_base_bdevs_operational": 4, 00:12:11.053 "base_bdevs_list": [ 00:12:11.053 { 00:12:11.053 "name": "BaseBdev1", 00:12:11.053 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:11.053 "is_configured": true, 00:12:11.053 "data_offset": 0, 00:12:11.053 "data_size": 65536 00:12:11.053 }, 00:12:11.053 { 00:12:11.053 "name": null, 00:12:11.053 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:11.053 "is_configured": false, 00:12:11.053 "data_offset": 
0, 00:12:11.053 "data_size": 65536 00:12:11.053 }, 00:12:11.053 { 00:12:11.053 "name": "BaseBdev3", 00:12:11.053 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:11.053 "is_configured": true, 00:12:11.053 "data_offset": 0, 00:12:11.053 "data_size": 65536 00:12:11.053 }, 00:12:11.053 { 00:12:11.053 "name": "BaseBdev4", 00:12:11.053 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:11.053 "is_configured": true, 00:12:11.053 "data_offset": 0, 00:12:11.053 "data_size": 65536 00:12:11.053 } 00:12:11.053 ] 00:12:11.053 }' 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.053 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 [2024-11-20 09:24:36.840751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.623 09:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 09:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.623 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.623 "name": "Existed_Raid", 00:12:11.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.623 "strip_size_kb": 0, 00:12:11.623 "state": "configuring", 00:12:11.623 
"raid_level": "raid1", 00:12:11.623 "superblock": false, 00:12:11.623 "num_base_bdevs": 4, 00:12:11.623 "num_base_bdevs_discovered": 2, 00:12:11.623 "num_base_bdevs_operational": 4, 00:12:11.623 "base_bdevs_list": [ 00:12:11.623 { 00:12:11.623 "name": null, 00:12:11.623 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:11.623 "is_configured": false, 00:12:11.623 "data_offset": 0, 00:12:11.623 "data_size": 65536 00:12:11.623 }, 00:12:11.623 { 00:12:11.623 "name": null, 00:12:11.623 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:11.623 "is_configured": false, 00:12:11.623 "data_offset": 0, 00:12:11.623 "data_size": 65536 00:12:11.623 }, 00:12:11.623 { 00:12:11.623 "name": "BaseBdev3", 00:12:11.623 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:11.623 "is_configured": true, 00:12:11.623 "data_offset": 0, 00:12:11.623 "data_size": 65536 00:12:11.623 }, 00:12:11.623 { 00:12:11.623 "name": "BaseBdev4", 00:12:11.624 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:11.624 "is_configured": true, 00:12:11.624 "data_offset": 0, 00:12:11.624 "data_size": 65536 00:12:11.624 } 00:12:11.624 ] 00:12:11.624 }' 00:12:11.624 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.624 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.254 [2024-11-20 09:24:37.510195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.254 "name": "Existed_Raid", 00:12:12.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.254 "strip_size_kb": 0, 00:12:12.254 "state": "configuring", 00:12:12.254 "raid_level": "raid1", 00:12:12.254 "superblock": false, 00:12:12.254 "num_base_bdevs": 4, 00:12:12.254 "num_base_bdevs_discovered": 3, 00:12:12.254 "num_base_bdevs_operational": 4, 00:12:12.254 "base_bdevs_list": [ 00:12:12.254 { 00:12:12.254 "name": null, 00:12:12.254 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:12.254 "is_configured": false, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 }, 00:12:12.254 { 00:12:12.254 "name": "BaseBdev2", 00:12:12.254 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:12.254 "is_configured": true, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 }, 00:12:12.254 { 00:12:12.254 "name": "BaseBdev3", 00:12:12.254 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:12.254 "is_configured": true, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 }, 00:12:12.254 { 00:12:12.254 "name": "BaseBdev4", 00:12:12.254 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:12.254 "is_configured": true, 00:12:12.254 "data_offset": 0, 00:12:12.254 "data_size": 65536 00:12:12.254 } 00:12:12.254 ] 00:12:12.254 }' 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.254 09:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 09:24:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7acc39c9-1cd2-4a93-92cf-231df5dbb1fe 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 [2024-11-20 09:24:38.169542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:12.823 [2024-11-20 09:24:38.169619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:12.823 [2024-11-20 09:24:38.169638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:12.823 
[2024-11-20 09:24:38.170045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:12.823 [2024-11-20 09:24:38.170306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:12.823 [2024-11-20 09:24:38.170325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:12.823 [2024-11-20 09:24:38.170742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.823 NewBaseBdev 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:12.823 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.823 [ 00:12:12.823 { 00:12:12.823 "name": "NewBaseBdev", 00:12:12.823 "aliases": [ 00:12:12.823 "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe" 00:12:12.823 ], 00:12:12.823 "product_name": "Malloc disk", 00:12:12.823 "block_size": 512, 00:12:12.823 "num_blocks": 65536, 00:12:12.823 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:12.823 "assigned_rate_limits": { 00:12:12.823 "rw_ios_per_sec": 0, 00:12:12.823 "rw_mbytes_per_sec": 0, 00:12:12.823 "r_mbytes_per_sec": 0, 00:12:12.823 "w_mbytes_per_sec": 0 00:12:12.823 }, 00:12:12.823 "claimed": true, 00:12:12.823 "claim_type": "exclusive_write", 00:12:12.823 "zoned": false, 00:12:12.823 "supported_io_types": { 00:12:12.823 "read": true, 00:12:12.823 "write": true, 00:12:12.823 "unmap": true, 00:12:12.823 "flush": true, 00:12:12.823 "reset": true, 00:12:12.823 "nvme_admin": false, 00:12:12.823 "nvme_io": false, 00:12:12.823 "nvme_io_md": false, 00:12:12.823 "write_zeroes": true, 00:12:12.823 "zcopy": true, 00:12:12.823 "get_zone_info": false, 00:12:12.823 "zone_management": false, 00:12:12.823 "zone_append": false, 00:12:12.823 "compare": false, 00:12:12.823 "compare_and_write": false, 00:12:12.823 "abort": true, 00:12:12.823 "seek_hole": false, 00:12:12.823 "seek_data": false, 00:12:12.823 "copy": true, 00:12:12.823 "nvme_iov_md": false 00:12:12.823 }, 00:12:12.823 "memory_domains": [ 00:12:12.823 { 00:12:12.823 "dma_device_id": "system", 00:12:12.824 "dma_device_type": 1 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.824 "dma_device_type": 2 00:12:12.824 } 00:12:12.824 ], 00:12:12.824 "driver_specific": {} 00:12:12.824 } 00:12:12.824 ] 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.824 "name": "Existed_Raid", 00:12:12.824 "uuid": "f177f167-2c43-4d68-a477-d34171a615ed", 00:12:12.824 "strip_size_kb": 0, 00:12:12.824 "state": "online", 00:12:12.824 
"raid_level": "raid1", 00:12:12.824 "superblock": false, 00:12:12.824 "num_base_bdevs": 4, 00:12:12.824 "num_base_bdevs_discovered": 4, 00:12:12.824 "num_base_bdevs_operational": 4, 00:12:12.824 "base_bdevs_list": [ 00:12:12.824 { 00:12:12.824 "name": "NewBaseBdev", 00:12:12.824 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 0, 00:12:12.824 "data_size": 65536 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "name": "BaseBdev2", 00:12:12.824 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 0, 00:12:12.824 "data_size": 65536 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "name": "BaseBdev3", 00:12:12.824 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 0, 00:12:12.824 "data_size": 65536 00:12:12.824 }, 00:12:12.824 { 00:12:12.824 "name": "BaseBdev4", 00:12:12.824 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:12.824 "is_configured": true, 00:12:12.824 "data_offset": 0, 00:12:12.824 "data_size": 65536 00:12:12.824 } 00:12:12.824 ] 00:12:12.824 }' 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.824 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.393 [2024-11-20 09:24:38.673073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.393 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:13.393 "name": "Existed_Raid", 00:12:13.393 "aliases": [ 00:12:13.393 "f177f167-2c43-4d68-a477-d34171a615ed" 00:12:13.393 ], 00:12:13.393 "product_name": "Raid Volume", 00:12:13.393 "block_size": 512, 00:12:13.393 "num_blocks": 65536, 00:12:13.393 "uuid": "f177f167-2c43-4d68-a477-d34171a615ed", 00:12:13.393 "assigned_rate_limits": { 00:12:13.393 "rw_ios_per_sec": 0, 00:12:13.393 "rw_mbytes_per_sec": 0, 00:12:13.393 "r_mbytes_per_sec": 0, 00:12:13.393 "w_mbytes_per_sec": 0 00:12:13.393 }, 00:12:13.393 "claimed": false, 00:12:13.393 "zoned": false, 00:12:13.393 "supported_io_types": { 00:12:13.393 "read": true, 00:12:13.393 "write": true, 00:12:13.393 "unmap": false, 00:12:13.393 "flush": false, 00:12:13.393 "reset": true, 00:12:13.393 "nvme_admin": false, 00:12:13.393 "nvme_io": false, 00:12:13.393 "nvme_io_md": false, 00:12:13.393 "write_zeroes": true, 00:12:13.393 "zcopy": false, 00:12:13.393 "get_zone_info": false, 00:12:13.393 "zone_management": false, 00:12:13.393 "zone_append": false, 00:12:13.393 "compare": false, 00:12:13.393 "compare_and_write": false, 00:12:13.393 "abort": false, 00:12:13.393 "seek_hole": false, 00:12:13.393 "seek_data": false, 00:12:13.393 
"copy": false, 00:12:13.393 "nvme_iov_md": false 00:12:13.393 }, 00:12:13.394 "memory_domains": [ 00:12:13.394 { 00:12:13.394 "dma_device_id": "system", 00:12:13.394 "dma_device_type": 1 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.394 "dma_device_type": 2 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "system", 00:12:13.394 "dma_device_type": 1 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.394 "dma_device_type": 2 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "system", 00:12:13.394 "dma_device_type": 1 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.394 "dma_device_type": 2 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "system", 00:12:13.394 "dma_device_type": 1 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.394 "dma_device_type": 2 00:12:13.394 } 00:12:13.394 ], 00:12:13.394 "driver_specific": { 00:12:13.394 "raid": { 00:12:13.394 "uuid": "f177f167-2c43-4d68-a477-d34171a615ed", 00:12:13.394 "strip_size_kb": 0, 00:12:13.394 "state": "online", 00:12:13.394 "raid_level": "raid1", 00:12:13.394 "superblock": false, 00:12:13.394 "num_base_bdevs": 4, 00:12:13.394 "num_base_bdevs_discovered": 4, 00:12:13.394 "num_base_bdevs_operational": 4, 00:12:13.394 "base_bdevs_list": [ 00:12:13.394 { 00:12:13.394 "name": "NewBaseBdev", 00:12:13.394 "uuid": "7acc39c9-1cd2-4a93-92cf-231df5dbb1fe", 00:12:13.394 "is_configured": true, 00:12:13.394 "data_offset": 0, 00:12:13.394 "data_size": 65536 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "name": "BaseBdev2", 00:12:13.394 "uuid": "aafe9ed8-75a1-4795-9974-6625edbb6c6b", 00:12:13.394 "is_configured": true, 00:12:13.394 "data_offset": 0, 00:12:13.394 "data_size": 65536 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "name": "BaseBdev3", 00:12:13.394 "uuid": "58b3d4a8-e05c-4ed9-a4b3-24e6ba31fd94", 00:12:13.394 
"is_configured": true, 00:12:13.394 "data_offset": 0, 00:12:13.394 "data_size": 65536 00:12:13.394 }, 00:12:13.394 { 00:12:13.394 "name": "BaseBdev4", 00:12:13.394 "uuid": "9f904174-aa0e-4d8b-a228-ff04e09fad89", 00:12:13.394 "is_configured": true, 00:12:13.394 "data_offset": 0, 00:12:13.394 "data_size": 65536 00:12:13.394 } 00:12:13.394 ] 00:12:13.394 } 00:12:13.394 } 00:12:13.394 }' 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:13.394 BaseBdev2 00:12:13.394 BaseBdev3 00:12:13.394 BaseBdev4' 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.394 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.654 09:24:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.654 09:24:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.654 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.654 [2024-11-20 09:24:38.992168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.654 [2024-11-20 09:24:38.992200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.654 [2024-11-20 09:24:38.992294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.655 [2024-11-20 09:24:38.992632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.655 [2024-11-20 09:24:38.992648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:13.655 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.655 09:24:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73546 00:12:13.655 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73546 ']' 00:12:13.655 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73546 00:12:13.655 09:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73546 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.655 killing process with pid 73546 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73546' 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73546 00:12:13.655 [2024-11-20 09:24:39.040271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.655 09:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73546 00:12:14.223 [2024-11-20 09:24:39.459234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:15.605 00:12:15.605 real 0m12.603s 00:12:15.605 user 0m19.925s 00:12:15.605 sys 0m2.239s 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.605 ************************************ 00:12:15.605 END TEST raid_state_function_test 00:12:15.605 ************************************ 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:15.605 09:24:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:15.605 09:24:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:15.605 09:24:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.605 09:24:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.605 ************************************ 00:12:15.605 START TEST raid_state_function_test_sb 00:12:15.605 ************************************ 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.605 
09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74224 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74224' 00:12:15.605 Process raid pid: 74224 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74224 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74224 ']' 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.605 09:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.605 [2024-11-20 09:24:40.918811] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:12:15.605 [2024-11-20 09:24:40.919092] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.865 [2024-11-20 09:24:41.091128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.865 [2024-11-20 09:24:41.228585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.125 [2024-11-20 09:24:41.474045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.125 [2024-11-20 09:24:41.474095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.386 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.386 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:16.386 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:16.386 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.386 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.386 [2024-11-20 09:24:41.837499] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.386 [2024-11-20 09:24:41.837566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.386 [2024-11-20 09:24:41.837578] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.386 [2024-11-20 09:24:41.837590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.386 [2024-11-20 09:24:41.837598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:16.386 [2024-11-20 09:24:41.837608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.386 [2024-11-20 09:24:41.837622] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:16.386 [2024-11-20 09:24:41.837633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.646 09:24:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.646 "name": "Existed_Raid", 00:12:16.646 "uuid": "79b0c087-7367-49bb-902f-8e85cc3c158c", 00:12:16.646 "strip_size_kb": 0, 00:12:16.646 "state": "configuring", 00:12:16.646 "raid_level": "raid1", 00:12:16.646 "superblock": true, 00:12:16.646 "num_base_bdevs": 4, 00:12:16.646 "num_base_bdevs_discovered": 0, 00:12:16.646 "num_base_bdevs_operational": 4, 00:12:16.646 "base_bdevs_list": [ 00:12:16.646 { 00:12:16.646 "name": "BaseBdev1", 00:12:16.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.646 "is_configured": false, 00:12:16.646 "data_offset": 0, 00:12:16.646 "data_size": 0 00:12:16.646 }, 00:12:16.646 { 00:12:16.646 "name": "BaseBdev2", 00:12:16.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.646 "is_configured": false, 00:12:16.646 "data_offset": 0, 00:12:16.646 "data_size": 0 00:12:16.646 }, 00:12:16.646 { 00:12:16.646 "name": "BaseBdev3", 00:12:16.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.646 "is_configured": false, 00:12:16.646 "data_offset": 0, 00:12:16.646 "data_size": 0 00:12:16.646 }, 00:12:16.646 { 00:12:16.646 "name": "BaseBdev4", 00:12:16.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.646 "is_configured": false, 00:12:16.646 "data_offset": 0, 00:12:16.646 "data_size": 0 00:12:16.646 } 00:12:16.646 ] 00:12:16.646 }' 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.646 09:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.906 09:24:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.906 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.906 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [2024-11-20 09:24:42.360536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.166 [2024-11-20 09:24:42.360655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [2024-11-20 09:24:42.372529] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.166 [2024-11-20 09:24:42.372581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.166 [2024-11-20 09:24:42.372592] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.166 [2024-11-20 09:24:42.372603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.166 [2024-11-20 09:24:42.372611] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.166 [2024-11-20 09:24:42.372622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.166 [2024-11-20 09:24:42.372629] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:17.166 [2024-11-20 09:24:42.372640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [2024-11-20 09:24:42.427886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.166 BaseBdev1 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.166 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 [ 00:12:17.166 { 00:12:17.166 "name": "BaseBdev1", 00:12:17.166 "aliases": [ 00:12:17.166 "56079d8c-55a1-415d-9a6c-e792c1848fd5" 00:12:17.166 ], 00:12:17.166 "product_name": "Malloc disk", 00:12:17.166 "block_size": 512, 00:12:17.166 "num_blocks": 65536, 00:12:17.166 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:17.166 "assigned_rate_limits": { 00:12:17.166 "rw_ios_per_sec": 0, 00:12:17.166 "rw_mbytes_per_sec": 0, 00:12:17.167 "r_mbytes_per_sec": 0, 00:12:17.167 "w_mbytes_per_sec": 0 00:12:17.167 }, 00:12:17.167 "claimed": true, 00:12:17.167 "claim_type": "exclusive_write", 00:12:17.167 "zoned": false, 00:12:17.167 "supported_io_types": { 00:12:17.167 "read": true, 00:12:17.167 "write": true, 00:12:17.167 "unmap": true, 00:12:17.167 "flush": true, 00:12:17.167 "reset": true, 00:12:17.167 "nvme_admin": false, 00:12:17.167 "nvme_io": false, 00:12:17.167 "nvme_io_md": false, 00:12:17.167 "write_zeroes": true, 00:12:17.167 "zcopy": true, 00:12:17.167 "get_zone_info": false, 00:12:17.167 "zone_management": false, 00:12:17.167 "zone_append": false, 00:12:17.167 "compare": false, 00:12:17.167 "compare_and_write": false, 00:12:17.167 "abort": true, 00:12:17.167 "seek_hole": false, 00:12:17.167 "seek_data": false, 00:12:17.167 "copy": true, 00:12:17.167 "nvme_iov_md": false 00:12:17.167 }, 00:12:17.167 "memory_domains": [ 00:12:17.167 { 00:12:17.167 "dma_device_id": "system", 00:12:17.167 "dma_device_type": 1 00:12:17.167 }, 00:12:17.167 { 00:12:17.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.167 "dma_device_type": 2 00:12:17.167 } 00:12:17.167 ], 00:12:17.167 "driver_specific": {} 
00:12:17.167 } 00:12:17.167 ] 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.167 "name": "Existed_Raid", 00:12:17.167 "uuid": "dc620796-7c35-4e17-988f-c9638088bd04", 00:12:17.167 "strip_size_kb": 0, 00:12:17.167 "state": "configuring", 00:12:17.167 "raid_level": "raid1", 00:12:17.167 "superblock": true, 00:12:17.167 "num_base_bdevs": 4, 00:12:17.167 "num_base_bdevs_discovered": 1, 00:12:17.167 "num_base_bdevs_operational": 4, 00:12:17.167 "base_bdevs_list": [ 00:12:17.167 { 00:12:17.167 "name": "BaseBdev1", 00:12:17.167 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:17.167 "is_configured": true, 00:12:17.167 "data_offset": 2048, 00:12:17.167 "data_size": 63488 00:12:17.167 }, 00:12:17.167 { 00:12:17.167 "name": "BaseBdev2", 00:12:17.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.167 "is_configured": false, 00:12:17.167 "data_offset": 0, 00:12:17.167 "data_size": 0 00:12:17.167 }, 00:12:17.167 { 00:12:17.167 "name": "BaseBdev3", 00:12:17.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.167 "is_configured": false, 00:12:17.167 "data_offset": 0, 00:12:17.167 "data_size": 0 00:12:17.167 }, 00:12:17.167 { 00:12:17.167 "name": "BaseBdev4", 00:12:17.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.167 "is_configured": false, 00:12:17.167 "data_offset": 0, 00:12:17.167 "data_size": 0 00:12:17.167 } 00:12:17.167 ] 00:12:17.167 }' 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.167 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.737 [2024-11-20 09:24:42.975586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.737 [2024-11-20 09:24:42.975720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 [2024-11-20 09:24:42.987634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.737 [2024-11-20 09:24:42.989866] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.737 [2024-11-20 09:24:42.989964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.737 [2024-11-20 09:24:42.990019] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.737 [2024-11-20 09:24:42.990074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.737 [2024-11-20 09:24:42.990122] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:17.737 [2024-11-20 09:24:42.990159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:17.737 09:24:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 09:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.737 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.737 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.737 "name": 
"Existed_Raid", 00:12:17.737 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:17.737 "strip_size_kb": 0, 00:12:17.737 "state": "configuring", 00:12:17.737 "raid_level": "raid1", 00:12:17.737 "superblock": true, 00:12:17.737 "num_base_bdevs": 4, 00:12:17.737 "num_base_bdevs_discovered": 1, 00:12:17.737 "num_base_bdevs_operational": 4, 00:12:17.737 "base_bdevs_list": [ 00:12:17.737 { 00:12:17.737 "name": "BaseBdev1", 00:12:17.737 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:17.737 "is_configured": true, 00:12:17.737 "data_offset": 2048, 00:12:17.737 "data_size": 63488 00:12:17.737 }, 00:12:17.737 { 00:12:17.737 "name": "BaseBdev2", 00:12:17.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.737 "is_configured": false, 00:12:17.737 "data_offset": 0, 00:12:17.737 "data_size": 0 00:12:17.737 }, 00:12:17.737 { 00:12:17.737 "name": "BaseBdev3", 00:12:17.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.737 "is_configured": false, 00:12:17.737 "data_offset": 0, 00:12:17.737 "data_size": 0 00:12:17.737 }, 00:12:17.737 { 00:12:17.737 "name": "BaseBdev4", 00:12:17.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.737 "is_configured": false, 00:12:17.737 "data_offset": 0, 00:12:17.737 "data_size": 0 00:12:17.737 } 00:12:17.737 ] 00:12:17.737 }' 00:12:17.737 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.737 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.308 [2024-11-20 09:24:43.520286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.308 
BaseBdev2 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.308 [ 00:12:18.308 { 00:12:18.308 "name": "BaseBdev2", 00:12:18.308 "aliases": [ 00:12:18.308 "891415c9-1d39-43d0-a727-fe53807bfe4e" 00:12:18.308 ], 00:12:18.308 "product_name": "Malloc disk", 00:12:18.308 "block_size": 512, 00:12:18.308 "num_blocks": 65536, 00:12:18.308 "uuid": "891415c9-1d39-43d0-a727-fe53807bfe4e", 00:12:18.308 "assigned_rate_limits": { 
00:12:18.308 "rw_ios_per_sec": 0, 00:12:18.308 "rw_mbytes_per_sec": 0, 00:12:18.308 "r_mbytes_per_sec": 0, 00:12:18.308 "w_mbytes_per_sec": 0 00:12:18.308 }, 00:12:18.308 "claimed": true, 00:12:18.308 "claim_type": "exclusive_write", 00:12:18.308 "zoned": false, 00:12:18.308 "supported_io_types": { 00:12:18.308 "read": true, 00:12:18.308 "write": true, 00:12:18.308 "unmap": true, 00:12:18.308 "flush": true, 00:12:18.308 "reset": true, 00:12:18.308 "nvme_admin": false, 00:12:18.308 "nvme_io": false, 00:12:18.308 "nvme_io_md": false, 00:12:18.308 "write_zeroes": true, 00:12:18.308 "zcopy": true, 00:12:18.308 "get_zone_info": false, 00:12:18.308 "zone_management": false, 00:12:18.308 "zone_append": false, 00:12:18.308 "compare": false, 00:12:18.308 "compare_and_write": false, 00:12:18.308 "abort": true, 00:12:18.308 "seek_hole": false, 00:12:18.308 "seek_data": false, 00:12:18.308 "copy": true, 00:12:18.308 "nvme_iov_md": false 00:12:18.308 }, 00:12:18.308 "memory_domains": [ 00:12:18.308 { 00:12:18.308 "dma_device_id": "system", 00:12:18.308 "dma_device_type": 1 00:12:18.308 }, 00:12:18.308 { 00:12:18.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.308 "dma_device_type": 2 00:12:18.308 } 00:12:18.308 ], 00:12:18.308 "driver_specific": {} 00:12:18.308 } 00:12:18.308 ] 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.308 "name": "Existed_Raid", 00:12:18.308 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:18.308 "strip_size_kb": 0, 00:12:18.308 "state": "configuring", 00:12:18.308 "raid_level": "raid1", 00:12:18.308 "superblock": true, 00:12:18.308 "num_base_bdevs": 4, 00:12:18.308 "num_base_bdevs_discovered": 2, 00:12:18.308 "num_base_bdevs_operational": 4, 00:12:18.308 
"base_bdevs_list": [ 00:12:18.308 { 00:12:18.308 "name": "BaseBdev1", 00:12:18.308 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:18.308 "is_configured": true, 00:12:18.308 "data_offset": 2048, 00:12:18.308 "data_size": 63488 00:12:18.308 }, 00:12:18.308 { 00:12:18.308 "name": "BaseBdev2", 00:12:18.308 "uuid": "891415c9-1d39-43d0-a727-fe53807bfe4e", 00:12:18.308 "is_configured": true, 00:12:18.308 "data_offset": 2048, 00:12:18.308 "data_size": 63488 00:12:18.308 }, 00:12:18.308 { 00:12:18.308 "name": "BaseBdev3", 00:12:18.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.308 "is_configured": false, 00:12:18.308 "data_offset": 0, 00:12:18.308 "data_size": 0 00:12:18.308 }, 00:12:18.308 { 00:12:18.308 "name": "BaseBdev4", 00:12:18.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.308 "is_configured": false, 00:12:18.308 "data_offset": 0, 00:12:18.308 "data_size": 0 00:12:18.308 } 00:12:18.308 ] 00:12:18.308 }' 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.308 09:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.881 [2024-11-20 09:24:44.137221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.881 BaseBdev3 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.881 [ 00:12:18.881 { 00:12:18.881 "name": "BaseBdev3", 00:12:18.881 "aliases": [ 00:12:18.881 "514233c5-a98c-4cb7-8909-ee16a0686c16" 00:12:18.881 ], 00:12:18.881 "product_name": "Malloc disk", 00:12:18.881 "block_size": 512, 00:12:18.881 "num_blocks": 65536, 00:12:18.881 "uuid": "514233c5-a98c-4cb7-8909-ee16a0686c16", 00:12:18.881 "assigned_rate_limits": { 00:12:18.881 "rw_ios_per_sec": 0, 00:12:18.881 "rw_mbytes_per_sec": 0, 00:12:18.881 "r_mbytes_per_sec": 0, 00:12:18.881 "w_mbytes_per_sec": 0 00:12:18.881 }, 00:12:18.881 "claimed": true, 00:12:18.881 "claim_type": "exclusive_write", 00:12:18.881 "zoned": false, 00:12:18.881 "supported_io_types": { 00:12:18.881 "read": true, 00:12:18.881 
"write": true, 00:12:18.881 "unmap": true, 00:12:18.881 "flush": true, 00:12:18.881 "reset": true, 00:12:18.881 "nvme_admin": false, 00:12:18.881 "nvme_io": false, 00:12:18.881 "nvme_io_md": false, 00:12:18.881 "write_zeroes": true, 00:12:18.881 "zcopy": true, 00:12:18.881 "get_zone_info": false, 00:12:18.881 "zone_management": false, 00:12:18.881 "zone_append": false, 00:12:18.881 "compare": false, 00:12:18.881 "compare_and_write": false, 00:12:18.881 "abort": true, 00:12:18.881 "seek_hole": false, 00:12:18.881 "seek_data": false, 00:12:18.881 "copy": true, 00:12:18.881 "nvme_iov_md": false 00:12:18.881 }, 00:12:18.881 "memory_domains": [ 00:12:18.881 { 00:12:18.881 "dma_device_id": "system", 00:12:18.881 "dma_device_type": 1 00:12:18.881 }, 00:12:18.881 { 00:12:18.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.881 "dma_device_type": 2 00:12:18.881 } 00:12:18.881 ], 00:12:18.881 "driver_specific": {} 00:12:18.881 } 00:12:18.881 ] 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.881 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.881 "name": "Existed_Raid", 00:12:18.881 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:18.881 "strip_size_kb": 0, 00:12:18.881 "state": "configuring", 00:12:18.881 "raid_level": "raid1", 00:12:18.881 "superblock": true, 00:12:18.881 "num_base_bdevs": 4, 00:12:18.881 "num_base_bdevs_discovered": 3, 00:12:18.881 "num_base_bdevs_operational": 4, 00:12:18.881 "base_bdevs_list": [ 00:12:18.881 { 00:12:18.881 "name": "BaseBdev1", 00:12:18.881 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:18.881 "is_configured": true, 00:12:18.881 "data_offset": 2048, 00:12:18.881 "data_size": 63488 00:12:18.881 }, 00:12:18.881 { 00:12:18.882 "name": "BaseBdev2", 00:12:18.882 "uuid": 
"891415c9-1d39-43d0-a727-fe53807bfe4e", 00:12:18.882 "is_configured": true, 00:12:18.882 "data_offset": 2048, 00:12:18.882 "data_size": 63488 00:12:18.882 }, 00:12:18.882 { 00:12:18.882 "name": "BaseBdev3", 00:12:18.882 "uuid": "514233c5-a98c-4cb7-8909-ee16a0686c16", 00:12:18.882 "is_configured": true, 00:12:18.882 "data_offset": 2048, 00:12:18.882 "data_size": 63488 00:12:18.882 }, 00:12:18.882 { 00:12:18.882 "name": "BaseBdev4", 00:12:18.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.882 "is_configured": false, 00:12:18.882 "data_offset": 0, 00:12:18.882 "data_size": 0 00:12:18.882 } 00:12:18.882 ] 00:12:18.882 }' 00:12:18.882 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.882 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.459 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:19.459 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.459 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.459 [2024-11-20 09:24:44.721359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:19.459 [2024-11-20 09:24:44.721778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:19.459 [2024-11-20 09:24:44.721798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.460 [2024-11-20 09:24:44.722120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:19.460 [2024-11-20 09:24:44.722307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:19.460 [2024-11-20 09:24:44.722323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:19.460 BaseBdev4 00:12:19.460 [2024-11-20 09:24:44.722521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.460 [ 00:12:19.460 { 00:12:19.460 "name": "BaseBdev4", 00:12:19.460 "aliases": [ 00:12:19.460 "72239d1e-d365-4cb6-b9b9-e399d14ef006" 00:12:19.460 ], 00:12:19.460 "product_name": "Malloc disk", 00:12:19.460 "block_size": 512, 00:12:19.460 
"num_blocks": 65536, 00:12:19.460 "uuid": "72239d1e-d365-4cb6-b9b9-e399d14ef006", 00:12:19.460 "assigned_rate_limits": { 00:12:19.460 "rw_ios_per_sec": 0, 00:12:19.460 "rw_mbytes_per_sec": 0, 00:12:19.460 "r_mbytes_per_sec": 0, 00:12:19.460 "w_mbytes_per_sec": 0 00:12:19.460 }, 00:12:19.460 "claimed": true, 00:12:19.460 "claim_type": "exclusive_write", 00:12:19.460 "zoned": false, 00:12:19.460 "supported_io_types": { 00:12:19.460 "read": true, 00:12:19.460 "write": true, 00:12:19.460 "unmap": true, 00:12:19.460 "flush": true, 00:12:19.460 "reset": true, 00:12:19.460 "nvme_admin": false, 00:12:19.460 "nvme_io": false, 00:12:19.460 "nvme_io_md": false, 00:12:19.460 "write_zeroes": true, 00:12:19.460 "zcopy": true, 00:12:19.460 "get_zone_info": false, 00:12:19.460 "zone_management": false, 00:12:19.460 "zone_append": false, 00:12:19.460 "compare": false, 00:12:19.460 "compare_and_write": false, 00:12:19.460 "abort": true, 00:12:19.460 "seek_hole": false, 00:12:19.460 "seek_data": false, 00:12:19.460 "copy": true, 00:12:19.460 "nvme_iov_md": false 00:12:19.460 }, 00:12:19.460 "memory_domains": [ 00:12:19.460 { 00:12:19.460 "dma_device_id": "system", 00:12:19.460 "dma_device_type": 1 00:12:19.460 }, 00:12:19.460 { 00:12:19.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.460 "dma_device_type": 2 00:12:19.460 } 00:12:19.460 ], 00:12:19.460 "driver_specific": {} 00:12:19.460 } 00:12:19.460 ] 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.460 "name": "Existed_Raid", 00:12:19.460 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:19.460 "strip_size_kb": 0, 00:12:19.460 "state": "online", 00:12:19.460 "raid_level": "raid1", 00:12:19.460 "superblock": true, 00:12:19.460 "num_base_bdevs": 4, 
00:12:19.460 "num_base_bdevs_discovered": 4, 00:12:19.460 "num_base_bdevs_operational": 4, 00:12:19.460 "base_bdevs_list": [ 00:12:19.460 { 00:12:19.460 "name": "BaseBdev1", 00:12:19.460 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:19.460 "is_configured": true, 00:12:19.460 "data_offset": 2048, 00:12:19.460 "data_size": 63488 00:12:19.460 }, 00:12:19.460 { 00:12:19.460 "name": "BaseBdev2", 00:12:19.460 "uuid": "891415c9-1d39-43d0-a727-fe53807bfe4e", 00:12:19.460 "is_configured": true, 00:12:19.460 "data_offset": 2048, 00:12:19.460 "data_size": 63488 00:12:19.460 }, 00:12:19.460 { 00:12:19.460 "name": "BaseBdev3", 00:12:19.460 "uuid": "514233c5-a98c-4cb7-8909-ee16a0686c16", 00:12:19.460 "is_configured": true, 00:12:19.460 "data_offset": 2048, 00:12:19.460 "data_size": 63488 00:12:19.460 }, 00:12:19.460 { 00:12:19.460 "name": "BaseBdev4", 00:12:19.460 "uuid": "72239d1e-d365-4cb6-b9b9-e399d14ef006", 00:12:19.460 "is_configured": true, 00:12:19.460 "data_offset": 2048, 00:12:19.460 "data_size": 63488 00:12:19.460 } 00:12:19.460 ] 00:12:19.460 }' 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.460 09:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.029 
09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.029 [2024-11-20 09:24:45.228980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.029 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.029 "name": "Existed_Raid", 00:12:20.029 "aliases": [ 00:12:20.029 "7c0db380-516a-43ca-be3b-647ec31d2037" 00:12:20.029 ], 00:12:20.029 "product_name": "Raid Volume", 00:12:20.029 "block_size": 512, 00:12:20.029 "num_blocks": 63488, 00:12:20.029 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:20.029 "assigned_rate_limits": { 00:12:20.029 "rw_ios_per_sec": 0, 00:12:20.029 "rw_mbytes_per_sec": 0, 00:12:20.029 "r_mbytes_per_sec": 0, 00:12:20.029 "w_mbytes_per_sec": 0 00:12:20.029 }, 00:12:20.029 "claimed": false, 00:12:20.029 "zoned": false, 00:12:20.029 "supported_io_types": { 00:12:20.029 "read": true, 00:12:20.029 "write": true, 00:12:20.029 "unmap": false, 00:12:20.029 "flush": false, 00:12:20.029 "reset": true, 00:12:20.029 "nvme_admin": false, 00:12:20.029 "nvme_io": false, 00:12:20.029 "nvme_io_md": false, 00:12:20.029 "write_zeroes": true, 00:12:20.029 "zcopy": false, 00:12:20.029 "get_zone_info": false, 00:12:20.029 "zone_management": false, 00:12:20.029 "zone_append": false, 00:12:20.029 "compare": false, 00:12:20.029 "compare_and_write": false, 00:12:20.029 "abort": false, 00:12:20.029 "seek_hole": false, 00:12:20.029 "seek_data": false, 00:12:20.029 "copy": false, 00:12:20.029 
"nvme_iov_md": false 00:12:20.029 }, 00:12:20.029 "memory_domains": [ 00:12:20.029 { 00:12:20.029 "dma_device_id": "system", 00:12:20.029 "dma_device_type": 1 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.029 "dma_device_type": 2 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "system", 00:12:20.029 "dma_device_type": 1 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.029 "dma_device_type": 2 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "system", 00:12:20.029 "dma_device_type": 1 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.029 "dma_device_type": 2 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "system", 00:12:20.029 "dma_device_type": 1 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.029 "dma_device_type": 2 00:12:20.029 } 00:12:20.029 ], 00:12:20.029 "driver_specific": { 00:12:20.029 "raid": { 00:12:20.029 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:20.029 "strip_size_kb": 0, 00:12:20.029 "state": "online", 00:12:20.029 "raid_level": "raid1", 00:12:20.029 "superblock": true, 00:12:20.029 "num_base_bdevs": 4, 00:12:20.029 "num_base_bdevs_discovered": 4, 00:12:20.029 "num_base_bdevs_operational": 4, 00:12:20.029 "base_bdevs_list": [ 00:12:20.029 { 00:12:20.029 "name": "BaseBdev1", 00:12:20.029 "uuid": "56079d8c-55a1-415d-9a6c-e792c1848fd5", 00:12:20.029 "is_configured": true, 00:12:20.029 "data_offset": 2048, 00:12:20.029 "data_size": 63488 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "name": "BaseBdev2", 00:12:20.029 "uuid": "891415c9-1d39-43d0-a727-fe53807bfe4e", 00:12:20.029 "is_configured": true, 00:12:20.029 "data_offset": 2048, 00:12:20.029 "data_size": 63488 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "name": "BaseBdev3", 00:12:20.029 "uuid": "514233c5-a98c-4cb7-8909-ee16a0686c16", 00:12:20.029 "is_configured": true, 
00:12:20.029 "data_offset": 2048, 00:12:20.029 "data_size": 63488 00:12:20.029 }, 00:12:20.029 { 00:12:20.029 "name": "BaseBdev4", 00:12:20.029 "uuid": "72239d1e-d365-4cb6-b9b9-e399d14ef006", 00:12:20.029 "is_configured": true, 00:12:20.029 "data_offset": 2048, 00:12:20.029 "data_size": 63488 00:12:20.029 } 00:12:20.029 ] 00:12:20.030 } 00:12:20.030 } 00:12:20.030 }' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:20.030 BaseBdev2 00:12:20.030 BaseBdev3 00:12:20.030 BaseBdev4' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.030 09:24:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.030 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.290 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 [2024-11-20 09:24:45.592101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:20.291 09:24:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.291 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.551 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.551 "name": "Existed_Raid", 00:12:20.551 "uuid": "7c0db380-516a-43ca-be3b-647ec31d2037", 00:12:20.551 "strip_size_kb": 0, 00:12:20.551 
"state": "online", 00:12:20.551 "raid_level": "raid1", 00:12:20.551 "superblock": true, 00:12:20.551 "num_base_bdevs": 4, 00:12:20.551 "num_base_bdevs_discovered": 3, 00:12:20.551 "num_base_bdevs_operational": 3, 00:12:20.551 "base_bdevs_list": [ 00:12:20.551 { 00:12:20.551 "name": null, 00:12:20.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.551 "is_configured": false, 00:12:20.551 "data_offset": 0, 00:12:20.551 "data_size": 63488 00:12:20.551 }, 00:12:20.551 { 00:12:20.551 "name": "BaseBdev2", 00:12:20.551 "uuid": "891415c9-1d39-43d0-a727-fe53807bfe4e", 00:12:20.551 "is_configured": true, 00:12:20.551 "data_offset": 2048, 00:12:20.551 "data_size": 63488 00:12:20.551 }, 00:12:20.551 { 00:12:20.551 "name": "BaseBdev3", 00:12:20.551 "uuid": "514233c5-a98c-4cb7-8909-ee16a0686c16", 00:12:20.551 "is_configured": true, 00:12:20.551 "data_offset": 2048, 00:12:20.551 "data_size": 63488 00:12:20.551 }, 00:12:20.551 { 00:12:20.551 "name": "BaseBdev4", 00:12:20.551 "uuid": "72239d1e-d365-4cb6-b9b9-e399d14ef006", 00:12:20.551 "is_configured": true, 00:12:20.551 "data_offset": 2048, 00:12:20.551 "data_size": 63488 00:12:20.551 } 00:12:20.551 ] 00:12:20.551 }' 00:12:20.551 09:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.551 09:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.811 09:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.811 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.811 [2024-11-20 09:24:46.255554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.071 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.071 [2024-11-20 09:24:46.434213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:21.331 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.331 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:21.331 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.331 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.332 [2024-11-20 09:24:46.614569] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:21.332 [2024-11-20 09:24:46.614764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.332 [2024-11-20 09:24:46.732581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.332 [2024-11-20 09:24:46.732741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.332 [2024-11-20 09:24:46.732761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.332 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.593 BaseBdev2 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:21.593 [ 00:12:21.593 { 00:12:21.593 "name": "BaseBdev2", 00:12:21.593 "aliases": [ 00:12:21.593 "6b4f3315-503b-4777-b47b-9af0114dfa40" 00:12:21.593 ], 00:12:21.593 "product_name": "Malloc disk", 00:12:21.593 "block_size": 512, 00:12:21.593 "num_blocks": 65536, 00:12:21.593 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:21.593 "assigned_rate_limits": { 00:12:21.593 "rw_ios_per_sec": 0, 00:12:21.593 "rw_mbytes_per_sec": 0, 00:12:21.593 "r_mbytes_per_sec": 0, 00:12:21.593 "w_mbytes_per_sec": 0 00:12:21.593 }, 00:12:21.593 "claimed": false, 00:12:21.593 "zoned": false, 00:12:21.593 "supported_io_types": { 00:12:21.593 "read": true, 00:12:21.593 "write": true, 00:12:21.593 "unmap": true, 00:12:21.593 "flush": true, 00:12:21.593 "reset": true, 00:12:21.593 "nvme_admin": false, 00:12:21.593 "nvme_io": false, 00:12:21.593 "nvme_io_md": false, 00:12:21.593 "write_zeroes": true, 00:12:21.593 "zcopy": true, 00:12:21.593 "get_zone_info": false, 00:12:21.593 "zone_management": false, 00:12:21.593 "zone_append": false, 00:12:21.593 "compare": false, 00:12:21.593 "compare_and_write": false, 00:12:21.593 "abort": true, 00:12:21.593 "seek_hole": false, 00:12:21.593 "seek_data": false, 00:12:21.593 "copy": true, 00:12:21.593 "nvme_iov_md": false 00:12:21.593 }, 00:12:21.593 "memory_domains": [ 00:12:21.593 { 00:12:21.593 "dma_device_id": "system", 00:12:21.593 "dma_device_type": 1 00:12:21.593 }, 00:12:21.593 { 00:12:21.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.593 "dma_device_type": 2 00:12:21.593 } 00:12:21.593 ], 00:12:21.593 "driver_specific": {} 00:12:21.593 } 00:12:21.593 ] 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:21.593 09:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.593 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 BaseBdev3 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 09:24:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 [ 00:12:21.594 { 00:12:21.594 "name": "BaseBdev3", 00:12:21.594 "aliases": [ 00:12:21.594 "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781" 00:12:21.594 ], 00:12:21.594 "product_name": "Malloc disk", 00:12:21.594 "block_size": 512, 00:12:21.594 "num_blocks": 65536, 00:12:21.594 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:21.594 "assigned_rate_limits": { 00:12:21.594 "rw_ios_per_sec": 0, 00:12:21.594 "rw_mbytes_per_sec": 0, 00:12:21.594 "r_mbytes_per_sec": 0, 00:12:21.594 "w_mbytes_per_sec": 0 00:12:21.594 }, 00:12:21.594 "claimed": false, 00:12:21.594 "zoned": false, 00:12:21.594 "supported_io_types": { 00:12:21.594 "read": true, 00:12:21.594 "write": true, 00:12:21.594 "unmap": true, 00:12:21.594 "flush": true, 00:12:21.594 "reset": true, 00:12:21.594 "nvme_admin": false, 00:12:21.594 "nvme_io": false, 00:12:21.594 "nvme_io_md": false, 00:12:21.594 "write_zeroes": true, 00:12:21.594 "zcopy": true, 00:12:21.594 "get_zone_info": false, 00:12:21.594 "zone_management": false, 00:12:21.594 "zone_append": false, 00:12:21.594 "compare": false, 00:12:21.594 "compare_and_write": false, 00:12:21.594 "abort": true, 00:12:21.594 "seek_hole": false, 00:12:21.594 "seek_data": false, 00:12:21.594 "copy": true, 00:12:21.594 "nvme_iov_md": false 00:12:21.594 }, 00:12:21.594 "memory_domains": [ 00:12:21.594 { 00:12:21.594 "dma_device_id": "system", 00:12:21.594 "dma_device_type": 1 00:12:21.594 }, 00:12:21.594 { 00:12:21.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.594 "dma_device_type": 2 00:12:21.594 } 00:12:21.594 ], 00:12:21.594 "driver_specific": {} 00:12:21.594 } 00:12:21.594 ] 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 09:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 BaseBdev4 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.594 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.861 [ 00:12:21.861 { 00:12:21.861 "name": "BaseBdev4", 00:12:21.861 "aliases": [ 00:12:21.861 "e820cc20-b9c0-4930-8652-f54f66a9c32f" 00:12:21.861 ], 00:12:21.861 "product_name": "Malloc disk", 00:12:21.861 "block_size": 512, 00:12:21.861 "num_blocks": 65536, 00:12:21.861 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:21.861 "assigned_rate_limits": { 00:12:21.861 "rw_ios_per_sec": 0, 00:12:21.861 "rw_mbytes_per_sec": 0, 00:12:21.861 "r_mbytes_per_sec": 0, 00:12:21.861 "w_mbytes_per_sec": 0 00:12:21.861 }, 00:12:21.861 "claimed": false, 00:12:21.861 "zoned": false, 00:12:21.861 "supported_io_types": { 00:12:21.861 "read": true, 00:12:21.861 "write": true, 00:12:21.861 "unmap": true, 00:12:21.861 "flush": true, 00:12:21.861 "reset": true, 00:12:21.861 "nvme_admin": false, 00:12:21.861 "nvme_io": false, 00:12:21.861 "nvme_io_md": false, 00:12:21.861 "write_zeroes": true, 00:12:21.861 "zcopy": true, 00:12:21.861 "get_zone_info": false, 00:12:21.861 "zone_management": false, 00:12:21.861 "zone_append": false, 00:12:21.861 "compare": false, 00:12:21.861 "compare_and_write": false, 00:12:21.861 "abort": true, 00:12:21.861 "seek_hole": false, 00:12:21.861 "seek_data": false, 00:12:21.861 "copy": true, 00:12:21.861 "nvme_iov_md": false 00:12:21.861 }, 00:12:21.861 "memory_domains": [ 00:12:21.861 { 00:12:21.861 "dma_device_id": "system", 00:12:21.861 "dma_device_type": 1 00:12:21.861 }, 00:12:21.861 { 00:12:21.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.861 "dma_device_type": 2 00:12:21.861 } 00:12:21.861 ], 00:12:21.861 "driver_specific": {} 00:12:21.861 } 00:12:21.861 ] 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.861 [2024-11-20 09:24:47.065098] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.861 [2024-11-20 09:24:47.065221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.861 [2024-11-20 09:24:47.065278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.861 [2024-11-20 09:24:47.067489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.861 [2024-11-20 09:24:47.067609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.861 "name": "Existed_Raid", 00:12:21.861 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:21.861 "strip_size_kb": 0, 00:12:21.861 "state": "configuring", 00:12:21.861 "raid_level": "raid1", 00:12:21.861 "superblock": true, 00:12:21.861 "num_base_bdevs": 4, 00:12:21.861 "num_base_bdevs_discovered": 3, 00:12:21.861 "num_base_bdevs_operational": 4, 00:12:21.861 "base_bdevs_list": [ 00:12:21.861 { 00:12:21.861 "name": "BaseBdev1", 00:12:21.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.861 "is_configured": false, 00:12:21.861 "data_offset": 0, 00:12:21.861 "data_size": 0 00:12:21.861 }, 00:12:21.861 { 00:12:21.861 "name": "BaseBdev2", 00:12:21.861 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 
00:12:21.861 "is_configured": true, 00:12:21.861 "data_offset": 2048, 00:12:21.861 "data_size": 63488 00:12:21.861 }, 00:12:21.861 { 00:12:21.861 "name": "BaseBdev3", 00:12:21.861 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:21.861 "is_configured": true, 00:12:21.861 "data_offset": 2048, 00:12:21.861 "data_size": 63488 00:12:21.861 }, 00:12:21.861 { 00:12:21.861 "name": "BaseBdev4", 00:12:21.861 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:21.861 "is_configured": true, 00:12:21.861 "data_offset": 2048, 00:12:21.861 "data_size": 63488 00:12:21.861 } 00:12:21.861 ] 00:12:21.861 }' 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.861 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.132 [2024-11-20 09:24:47.548338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.132 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.394 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.394 "name": "Existed_Raid", 00:12:22.395 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:22.395 "strip_size_kb": 0, 00:12:22.395 "state": "configuring", 00:12:22.395 "raid_level": "raid1", 00:12:22.395 "superblock": true, 00:12:22.395 "num_base_bdevs": 4, 00:12:22.395 "num_base_bdevs_discovered": 2, 00:12:22.395 "num_base_bdevs_operational": 4, 00:12:22.395 "base_bdevs_list": [ 00:12:22.395 { 00:12:22.395 "name": "BaseBdev1", 00:12:22.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.395 "is_configured": false, 00:12:22.395 "data_offset": 0, 00:12:22.395 "data_size": 0 00:12:22.395 }, 00:12:22.395 { 00:12:22.395 "name": null, 00:12:22.395 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:22.395 
"is_configured": false, 00:12:22.395 "data_offset": 0, 00:12:22.395 "data_size": 63488 00:12:22.395 }, 00:12:22.395 { 00:12:22.395 "name": "BaseBdev3", 00:12:22.395 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:22.395 "is_configured": true, 00:12:22.395 "data_offset": 2048, 00:12:22.395 "data_size": 63488 00:12:22.395 }, 00:12:22.395 { 00:12:22.395 "name": "BaseBdev4", 00:12:22.395 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:22.395 "is_configured": true, 00:12:22.395 "data_offset": 2048, 00:12:22.395 "data_size": 63488 00:12:22.395 } 00:12:22.395 ] 00:12:22.395 }' 00:12:22.395 09:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.395 09:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.653 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.912 [2024-11-20 09:24:48.116644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.912 BaseBdev1 
00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.912 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.913 [ 00:12:22.913 { 00:12:22.913 "name": "BaseBdev1", 00:12:22.913 "aliases": [ 00:12:22.913 "f2353346-df9a-45c5-b29e-ed61f0c2781d" 00:12:22.913 ], 00:12:22.913 "product_name": "Malloc disk", 00:12:22.913 "block_size": 512, 00:12:22.913 "num_blocks": 65536, 00:12:22.913 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:22.913 "assigned_rate_limits": { 00:12:22.913 
"rw_ios_per_sec": 0, 00:12:22.913 "rw_mbytes_per_sec": 0, 00:12:22.913 "r_mbytes_per_sec": 0, 00:12:22.913 "w_mbytes_per_sec": 0 00:12:22.913 }, 00:12:22.913 "claimed": true, 00:12:22.913 "claim_type": "exclusive_write", 00:12:22.913 "zoned": false, 00:12:22.913 "supported_io_types": { 00:12:22.913 "read": true, 00:12:22.913 "write": true, 00:12:22.913 "unmap": true, 00:12:22.913 "flush": true, 00:12:22.913 "reset": true, 00:12:22.913 "nvme_admin": false, 00:12:22.913 "nvme_io": false, 00:12:22.913 "nvme_io_md": false, 00:12:22.913 "write_zeroes": true, 00:12:22.913 "zcopy": true, 00:12:22.913 "get_zone_info": false, 00:12:22.913 "zone_management": false, 00:12:22.913 "zone_append": false, 00:12:22.913 "compare": false, 00:12:22.913 "compare_and_write": false, 00:12:22.913 "abort": true, 00:12:22.913 "seek_hole": false, 00:12:22.913 "seek_data": false, 00:12:22.913 "copy": true, 00:12:22.913 "nvme_iov_md": false 00:12:22.913 }, 00:12:22.913 "memory_domains": [ 00:12:22.913 { 00:12:22.913 "dma_device_id": "system", 00:12:22.913 "dma_device_type": 1 00:12:22.913 }, 00:12:22.913 { 00:12:22.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.913 "dma_device_type": 2 00:12:22.913 } 00:12:22.913 ], 00:12:22.913 "driver_specific": {} 00:12:22.913 } 00:12:22.913 ] 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.913 "name": "Existed_Raid", 00:12:22.913 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:22.913 "strip_size_kb": 0, 00:12:22.913 "state": "configuring", 00:12:22.913 "raid_level": "raid1", 00:12:22.913 "superblock": true, 00:12:22.913 "num_base_bdevs": 4, 00:12:22.913 "num_base_bdevs_discovered": 3, 00:12:22.913 "num_base_bdevs_operational": 4, 00:12:22.913 "base_bdevs_list": [ 00:12:22.913 { 00:12:22.913 "name": "BaseBdev1", 00:12:22.913 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:22.913 "is_configured": true, 00:12:22.913 "data_offset": 2048, 00:12:22.913 "data_size": 63488 
00:12:22.913 }, 00:12:22.913 { 00:12:22.913 "name": null, 00:12:22.913 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:22.913 "is_configured": false, 00:12:22.913 "data_offset": 0, 00:12:22.913 "data_size": 63488 00:12:22.913 }, 00:12:22.913 { 00:12:22.913 "name": "BaseBdev3", 00:12:22.913 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:22.913 "is_configured": true, 00:12:22.913 "data_offset": 2048, 00:12:22.913 "data_size": 63488 00:12:22.913 }, 00:12:22.913 { 00:12:22.913 "name": "BaseBdev4", 00:12:22.913 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:22.913 "is_configured": true, 00:12:22.913 "data_offset": 2048, 00:12:22.913 "data_size": 63488 00:12:22.913 } 00:12:22.913 ] 00:12:22.913 }' 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.913 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.480 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.481 
[2024-11-20 09:24:48.707839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.481 09:24:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.481 "name": "Existed_Raid", 00:12:23.481 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:23.481 "strip_size_kb": 0, 00:12:23.481 "state": "configuring", 00:12:23.481 "raid_level": "raid1", 00:12:23.481 "superblock": true, 00:12:23.481 "num_base_bdevs": 4, 00:12:23.481 "num_base_bdevs_discovered": 2, 00:12:23.481 "num_base_bdevs_operational": 4, 00:12:23.481 "base_bdevs_list": [ 00:12:23.481 { 00:12:23.481 "name": "BaseBdev1", 00:12:23.481 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:23.481 "is_configured": true, 00:12:23.481 "data_offset": 2048, 00:12:23.481 "data_size": 63488 00:12:23.481 }, 00:12:23.481 { 00:12:23.481 "name": null, 00:12:23.481 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:23.481 "is_configured": false, 00:12:23.481 "data_offset": 0, 00:12:23.481 "data_size": 63488 00:12:23.481 }, 00:12:23.481 { 00:12:23.481 "name": null, 00:12:23.481 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:23.481 "is_configured": false, 00:12:23.481 "data_offset": 0, 00:12:23.481 "data_size": 63488 00:12:23.481 }, 00:12:23.481 { 00:12:23.481 "name": "BaseBdev4", 00:12:23.481 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:23.481 "is_configured": true, 00:12:23.481 "data_offset": 2048, 00:12:23.481 "data_size": 63488 00:12:23.481 } 00:12:23.481 ] 00:12:23.481 }' 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.481 09:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.050 
09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.050 [2024-11-20 09:24:49.283160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.050 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.050 "name": "Existed_Raid", 00:12:24.050 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:24.050 "strip_size_kb": 0, 00:12:24.050 "state": "configuring", 00:12:24.050 "raid_level": "raid1", 00:12:24.050 "superblock": true, 00:12:24.050 "num_base_bdevs": 4, 00:12:24.050 "num_base_bdevs_discovered": 3, 00:12:24.050 "num_base_bdevs_operational": 4, 00:12:24.050 "base_bdevs_list": [ 00:12:24.050 { 00:12:24.050 "name": "BaseBdev1", 00:12:24.050 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:24.050 "is_configured": true, 00:12:24.050 "data_offset": 2048, 00:12:24.050 "data_size": 63488 00:12:24.050 }, 00:12:24.050 { 00:12:24.050 "name": null, 00:12:24.050 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:24.050 "is_configured": false, 00:12:24.051 "data_offset": 0, 00:12:24.051 "data_size": 63488 00:12:24.051 }, 00:12:24.051 { 00:12:24.051 "name": "BaseBdev3", 00:12:24.051 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:24.051 "is_configured": true, 00:12:24.051 "data_offset": 2048, 00:12:24.051 "data_size": 63488 00:12:24.051 }, 00:12:24.051 { 00:12:24.051 "name": "BaseBdev4", 00:12:24.051 "uuid": 
"e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:24.051 "is_configured": true, 00:12:24.051 "data_offset": 2048, 00:12:24.051 "data_size": 63488 00:12:24.051 } 00:12:24.051 ] 00:12:24.051 }' 00:12:24.051 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.051 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 [2024-11-20 09:24:49.838323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.619 09:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.619 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.619 "name": "Existed_Raid", 00:12:24.619 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:24.619 "strip_size_kb": 0, 00:12:24.619 "state": "configuring", 00:12:24.619 "raid_level": "raid1", 00:12:24.619 "superblock": true, 00:12:24.620 "num_base_bdevs": 4, 00:12:24.620 "num_base_bdevs_discovered": 2, 00:12:24.620 "num_base_bdevs_operational": 4, 00:12:24.620 "base_bdevs_list": [ 00:12:24.620 { 00:12:24.620 "name": null, 00:12:24.620 
"uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:24.620 "is_configured": false, 00:12:24.620 "data_offset": 0, 00:12:24.620 "data_size": 63488 00:12:24.620 }, 00:12:24.620 { 00:12:24.620 "name": null, 00:12:24.620 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:24.620 "is_configured": false, 00:12:24.620 "data_offset": 0, 00:12:24.620 "data_size": 63488 00:12:24.620 }, 00:12:24.620 { 00:12:24.620 "name": "BaseBdev3", 00:12:24.620 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:24.620 "is_configured": true, 00:12:24.620 "data_offset": 2048, 00:12:24.620 "data_size": 63488 00:12:24.620 }, 00:12:24.620 { 00:12:24.620 "name": "BaseBdev4", 00:12:24.620 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:24.620 "is_configured": true, 00:12:24.620 "data_offset": 2048, 00:12:24.620 "data_size": 63488 00:12:24.620 } 00:12:24.620 ] 00:12:24.620 }' 00:12:24.620 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.620 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 [2024-11-20 09:24:50.459595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.189 "name": "Existed_Raid", 00:12:25.189 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:25.189 "strip_size_kb": 0, 00:12:25.189 "state": "configuring", 00:12:25.189 "raid_level": "raid1", 00:12:25.189 "superblock": true, 00:12:25.189 "num_base_bdevs": 4, 00:12:25.189 "num_base_bdevs_discovered": 3, 00:12:25.189 "num_base_bdevs_operational": 4, 00:12:25.189 "base_bdevs_list": [ 00:12:25.189 { 00:12:25.189 "name": null, 00:12:25.189 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:25.189 "is_configured": false, 00:12:25.189 "data_offset": 0, 00:12:25.189 "data_size": 63488 00:12:25.189 }, 00:12:25.189 { 00:12:25.189 "name": "BaseBdev2", 00:12:25.189 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 }, 00:12:25.189 { 00:12:25.189 "name": "BaseBdev3", 00:12:25.189 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 }, 00:12:25.189 { 00:12:25.189 "name": "BaseBdev4", 00:12:25.189 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 } 00:12:25.189 ] 00:12:25.189 }' 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.189 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2353346-df9a-45c5-b29e-ed61f0c2781d 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 09:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 [2024-11-20 09:24:51.040054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:25.757 [2024-11-20 09:24:51.040450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.757 [2024-11-20 09:24:51.040474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.757 [2024-11-20 09:24:51.040769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:25.757 
NewBaseBdev 00:12:25.757 [2024-11-20 09:24:51.040979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.757 [2024-11-20 09:24:51.040997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:25.757 [2024-11-20 09:24:51.041179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.757 [ 00:12:25.757 { 00:12:25.757 "name": "NewBaseBdev", 00:12:25.757 "aliases": [ 00:12:25.757 "f2353346-df9a-45c5-b29e-ed61f0c2781d" 00:12:25.757 ], 00:12:25.757 "product_name": "Malloc disk", 00:12:25.757 "block_size": 512, 00:12:25.757 "num_blocks": 65536, 00:12:25.757 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:25.757 "assigned_rate_limits": { 00:12:25.757 "rw_ios_per_sec": 0, 00:12:25.757 "rw_mbytes_per_sec": 0, 00:12:25.757 "r_mbytes_per_sec": 0, 00:12:25.757 "w_mbytes_per_sec": 0 00:12:25.757 }, 00:12:25.757 "claimed": true, 00:12:25.757 "claim_type": "exclusive_write", 00:12:25.757 "zoned": false, 00:12:25.757 "supported_io_types": { 00:12:25.757 "read": true, 00:12:25.757 "write": true, 00:12:25.757 "unmap": true, 00:12:25.757 "flush": true, 00:12:25.757 "reset": true, 00:12:25.757 "nvme_admin": false, 00:12:25.757 "nvme_io": false, 00:12:25.757 "nvme_io_md": false, 00:12:25.757 "write_zeroes": true, 00:12:25.757 "zcopy": true, 00:12:25.757 "get_zone_info": false, 00:12:25.757 "zone_management": false, 00:12:25.757 "zone_append": false, 00:12:25.757 "compare": false, 00:12:25.757 "compare_and_write": false, 00:12:25.757 "abort": true, 00:12:25.757 "seek_hole": false, 00:12:25.757 "seek_data": false, 00:12:25.757 "copy": true, 00:12:25.757 "nvme_iov_md": false 00:12:25.757 }, 00:12:25.757 "memory_domains": [ 00:12:25.757 { 00:12:25.757 "dma_device_id": "system", 00:12:25.757 "dma_device_type": 1 00:12:25.757 }, 00:12:25.757 { 00:12:25.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.757 "dma_device_type": 2 00:12:25.757 } 00:12:25.757 ], 00:12:25.757 "driver_specific": {} 00:12:25.757 } 00:12:25.757 ] 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.757 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.758 "name": "Existed_Raid", 00:12:25.758 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:25.758 "strip_size_kb": 0, 00:12:25.758 "state": "online", 00:12:25.758 "raid_level": 
"raid1", 00:12:25.758 "superblock": true, 00:12:25.758 "num_base_bdevs": 4, 00:12:25.758 "num_base_bdevs_discovered": 4, 00:12:25.758 "num_base_bdevs_operational": 4, 00:12:25.758 "base_bdevs_list": [ 00:12:25.758 { 00:12:25.758 "name": "NewBaseBdev", 00:12:25.758 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:25.758 "is_configured": true, 00:12:25.758 "data_offset": 2048, 00:12:25.758 "data_size": 63488 00:12:25.758 }, 00:12:25.758 { 00:12:25.758 "name": "BaseBdev2", 00:12:25.758 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:25.758 "is_configured": true, 00:12:25.758 "data_offset": 2048, 00:12:25.758 "data_size": 63488 00:12:25.758 }, 00:12:25.758 { 00:12:25.758 "name": "BaseBdev3", 00:12:25.758 "uuid": "5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:25.758 "is_configured": true, 00:12:25.758 "data_offset": 2048, 00:12:25.758 "data_size": 63488 00:12:25.758 }, 00:12:25.758 { 00:12:25.758 "name": "BaseBdev4", 00:12:25.758 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:25.758 "is_configured": true, 00:12:25.758 "data_offset": 2048, 00:12:25.758 "data_size": 63488 00:12:25.758 } 00:12:25.758 ] 00:12:25.758 }' 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.758 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.325 [2024-11-20 09:24:51.579722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.325 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.325 "name": "Existed_Raid", 00:12:26.325 "aliases": [ 00:12:26.325 "49fa13b0-8876-4772-8331-87959eaa35af" 00:12:26.325 ], 00:12:26.325 "product_name": "Raid Volume", 00:12:26.325 "block_size": 512, 00:12:26.325 "num_blocks": 63488, 00:12:26.325 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:26.325 "assigned_rate_limits": { 00:12:26.325 "rw_ios_per_sec": 0, 00:12:26.325 "rw_mbytes_per_sec": 0, 00:12:26.325 "r_mbytes_per_sec": 0, 00:12:26.325 "w_mbytes_per_sec": 0 00:12:26.325 }, 00:12:26.325 "claimed": false, 00:12:26.325 "zoned": false, 00:12:26.325 "supported_io_types": { 00:12:26.325 "read": true, 00:12:26.325 "write": true, 00:12:26.325 "unmap": false, 00:12:26.325 "flush": false, 00:12:26.325 "reset": true, 00:12:26.325 "nvme_admin": false, 00:12:26.325 "nvme_io": false, 00:12:26.325 "nvme_io_md": false, 00:12:26.325 "write_zeroes": true, 00:12:26.325 "zcopy": false, 00:12:26.325 "get_zone_info": false, 00:12:26.325 "zone_management": false, 00:12:26.325 "zone_append": false, 00:12:26.325 "compare": false, 00:12:26.325 "compare_and_write": false, 00:12:26.325 "abort": false, 00:12:26.325 "seek_hole": false, 
00:12:26.325 "seek_data": false, 00:12:26.325 "copy": false, 00:12:26.325 "nvme_iov_md": false 00:12:26.325 }, 00:12:26.325 "memory_domains": [ 00:12:26.325 { 00:12:26.325 "dma_device_id": "system", 00:12:26.325 "dma_device_type": 1 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.325 "dma_device_type": 2 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "system", 00:12:26.325 "dma_device_type": 1 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.325 "dma_device_type": 2 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "system", 00:12:26.325 "dma_device_type": 1 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.325 "dma_device_type": 2 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "system", 00:12:26.325 "dma_device_type": 1 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.325 "dma_device_type": 2 00:12:26.325 } 00:12:26.325 ], 00:12:26.325 "driver_specific": { 00:12:26.325 "raid": { 00:12:26.325 "uuid": "49fa13b0-8876-4772-8331-87959eaa35af", 00:12:26.325 "strip_size_kb": 0, 00:12:26.325 "state": "online", 00:12:26.325 "raid_level": "raid1", 00:12:26.325 "superblock": true, 00:12:26.325 "num_base_bdevs": 4, 00:12:26.325 "num_base_bdevs_discovered": 4, 00:12:26.325 "num_base_bdevs_operational": 4, 00:12:26.325 "base_bdevs_list": [ 00:12:26.325 { 00:12:26.325 "name": "NewBaseBdev", 00:12:26.325 "uuid": "f2353346-df9a-45c5-b29e-ed61f0c2781d", 00:12:26.325 "is_configured": true, 00:12:26.325 "data_offset": 2048, 00:12:26.325 "data_size": 63488 00:12:26.325 }, 00:12:26.325 { 00:12:26.325 "name": "BaseBdev2", 00:12:26.326 "uuid": "6b4f3315-503b-4777-b47b-9af0114dfa40", 00:12:26.326 "is_configured": true, 00:12:26.326 "data_offset": 2048, 00:12:26.326 "data_size": 63488 00:12:26.326 }, 00:12:26.326 { 00:12:26.326 "name": "BaseBdev3", 00:12:26.326 "uuid": 
"5e9a74ed-10f5-4b0e-8204-8a95ee4fd781", 00:12:26.326 "is_configured": true, 00:12:26.326 "data_offset": 2048, 00:12:26.326 "data_size": 63488 00:12:26.326 }, 00:12:26.326 { 00:12:26.326 "name": "BaseBdev4", 00:12:26.326 "uuid": "e820cc20-b9c0-4930-8652-f54f66a9c32f", 00:12:26.326 "is_configured": true, 00:12:26.326 "data_offset": 2048, 00:12:26.326 "data_size": 63488 00:12:26.326 } 00:12:26.326 ] 00:12:26.326 } 00:12:26.326 } 00:12:26.326 }' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:26.326 BaseBdev2 00:12:26.326 BaseBdev3 00:12:26.326 BaseBdev4' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.326 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.585 
09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 [2024-11-20 09:24:51.894744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:26.585 [2024-11-20 09:24:51.894775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.585 [2024-11-20 09:24:51.894871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.585 [2024-11-20 09:24:51.895195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.585 [2024-11-20 09:24:51.895210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:26.585 09:24:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74224 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74224 ']' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74224 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74224 00:12:26.585 killing process with pid 74224 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74224' 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74224 00:12:26.585 [2024-11-20 09:24:51.942065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.585 09:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74224 00:12:27.153 [2024-11-20 09:24:52.408685] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.533 09:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:28.533 00:12:28.533 real 0m12.898s 00:12:28.533 user 0m20.396s 00:12:28.533 sys 0m2.336s 00:12:28.533 09:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.533 09:24:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.533 ************************************ 00:12:28.533 END TEST raid_state_function_test_sb 00:12:28.533 ************************************ 00:12:28.533 09:24:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:28.533 09:24:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:28.533 09:24:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.533 09:24:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.533 ************************************ 00:12:28.533 START TEST raid_superblock_test 00:12:28.533 ************************************ 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74909 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74909 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74909 ']' 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.533 09:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.533 [2024-11-20 09:24:53.865294] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:12:28.533 [2024-11-20 09:24:53.865456] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74909 ] 00:12:28.793 [2024-11-20 09:24:54.047590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.793 [2024-11-20 09:24:54.181254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.052 [2024-11-20 09:24:54.420085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.052 [2024-11-20 09:24:54.420136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:29.621 
09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 malloc1 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 [2024-11-20 09:24:54.853298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.621 [2024-11-20 09:24:54.853396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.621 [2024-11-20 09:24:54.853438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:29.621 [2024-11-20 09:24:54.853460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.621 [2024-11-20 09:24:54.856152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.621 [2024-11-20 09:24:54.856193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.621 pt1 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 malloc2 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 [2024-11-20 09:24:54.918271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:29.621 [2024-11-20 09:24:54.918362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.621 [2024-11-20 09:24:54.918392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:29.621 [2024-11-20 09:24:54.918403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.621 [2024-11-20 09:24:54.921231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.621 [2024-11-20 09:24:54.921286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.621 
pt2 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 malloc3 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.621 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.621 [2024-11-20 09:24:54.991893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:29.621 [2024-11-20 09:24:54.991962] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.621 [2024-11-20 09:24:54.991987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:29.621 [2024-11-20 09:24:54.991996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.621 [2024-11-20 09:24:54.994427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.622 [2024-11-20 09:24:54.994471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:29.622 pt3 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.622 09:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.622 malloc4 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.622 [2024-11-20 09:24:55.057938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:29.622 [2024-11-20 09:24:55.058016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.622 [2024-11-20 09:24:55.058040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:29.622 [2024-11-20 09:24:55.058051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.622 [2024-11-20 09:24:55.060844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.622 [2024-11-20 09:24:55.060883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:29.622 pt4 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.622 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.622 [2024-11-20 09:24:55.069969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:29.622 [2024-11-20 09:24:55.072526] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.622 [2024-11-20 09:24:55.072608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.622 [2024-11-20 09:24:55.072661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:29.622 [2024-11-20 09:24:55.072911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:29.622 [2024-11-20 09:24:55.072936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.622 [2024-11-20 09:24:55.073292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:29.622 [2024-11-20 09:24:55.073576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:29.622 [2024-11-20 09:24:55.073602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:29.622 [2024-11-20 09:24:55.073852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.882 
09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.882 "name": "raid_bdev1", 00:12:29.882 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:29.882 "strip_size_kb": 0, 00:12:29.882 "state": "online", 00:12:29.882 "raid_level": "raid1", 00:12:29.882 "superblock": true, 00:12:29.882 "num_base_bdevs": 4, 00:12:29.882 "num_base_bdevs_discovered": 4, 00:12:29.882 "num_base_bdevs_operational": 4, 00:12:29.882 "base_bdevs_list": [ 00:12:29.882 { 00:12:29.882 "name": "pt1", 00:12:29.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.882 "is_configured": true, 00:12:29.882 "data_offset": 2048, 00:12:29.882 "data_size": 63488 00:12:29.882 }, 00:12:29.882 { 00:12:29.882 "name": "pt2", 00:12:29.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.882 "is_configured": true, 00:12:29.882 "data_offset": 2048, 00:12:29.882 "data_size": 63488 00:12:29.882 }, 00:12:29.882 { 00:12:29.882 "name": "pt3", 00:12:29.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.882 "is_configured": true, 00:12:29.882 "data_offset": 2048, 00:12:29.882 "data_size": 63488 
00:12:29.882 }, 00:12:29.882 { 00:12:29.882 "name": "pt4", 00:12:29.882 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.882 "is_configured": true, 00:12:29.882 "data_offset": 2048, 00:12:29.882 "data_size": 63488 00:12:29.882 } 00:12:29.882 ] 00:12:29.882 }' 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.882 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.142 [2024-11-20 09:24:55.545582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.142 "name": "raid_bdev1", 00:12:30.142 "aliases": [ 00:12:30.142 "3a049ddc-7985-4942-887d-fd0602494b8d" 00:12:30.142 ], 
00:12:30.142 "product_name": "Raid Volume", 00:12:30.142 "block_size": 512, 00:12:30.142 "num_blocks": 63488, 00:12:30.142 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:30.142 "assigned_rate_limits": { 00:12:30.142 "rw_ios_per_sec": 0, 00:12:30.142 "rw_mbytes_per_sec": 0, 00:12:30.142 "r_mbytes_per_sec": 0, 00:12:30.142 "w_mbytes_per_sec": 0 00:12:30.142 }, 00:12:30.142 "claimed": false, 00:12:30.142 "zoned": false, 00:12:30.142 "supported_io_types": { 00:12:30.142 "read": true, 00:12:30.142 "write": true, 00:12:30.142 "unmap": false, 00:12:30.142 "flush": false, 00:12:30.142 "reset": true, 00:12:30.142 "nvme_admin": false, 00:12:30.142 "nvme_io": false, 00:12:30.142 "nvme_io_md": false, 00:12:30.142 "write_zeroes": true, 00:12:30.142 "zcopy": false, 00:12:30.142 "get_zone_info": false, 00:12:30.142 "zone_management": false, 00:12:30.142 "zone_append": false, 00:12:30.142 "compare": false, 00:12:30.142 "compare_and_write": false, 00:12:30.142 "abort": false, 00:12:30.142 "seek_hole": false, 00:12:30.142 "seek_data": false, 00:12:30.142 "copy": false, 00:12:30.142 "nvme_iov_md": false 00:12:30.142 }, 00:12:30.142 "memory_domains": [ 00:12:30.142 { 00:12:30.142 "dma_device_id": "system", 00:12:30.142 "dma_device_type": 1 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.142 "dma_device_type": 2 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": "system", 00:12:30.142 "dma_device_type": 1 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.142 "dma_device_type": 2 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": "system", 00:12:30.142 "dma_device_type": 1 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.142 "dma_device_type": 2 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": "system", 00:12:30.142 "dma_device_type": 1 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:30.142 "dma_device_type": 2 00:12:30.142 } 00:12:30.142 ], 00:12:30.142 "driver_specific": { 00:12:30.142 "raid": { 00:12:30.142 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:30.142 "strip_size_kb": 0, 00:12:30.142 "state": "online", 00:12:30.142 "raid_level": "raid1", 00:12:30.142 "superblock": true, 00:12:30.142 "num_base_bdevs": 4, 00:12:30.142 "num_base_bdevs_discovered": 4, 00:12:30.142 "num_base_bdevs_operational": 4, 00:12:30.142 "base_bdevs_list": [ 00:12:30.142 { 00:12:30.142 "name": "pt1", 00:12:30.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.142 "is_configured": true, 00:12:30.142 "data_offset": 2048, 00:12:30.142 "data_size": 63488 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "name": "pt2", 00:12:30.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.142 "is_configured": true, 00:12:30.142 "data_offset": 2048, 00:12:30.142 "data_size": 63488 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "name": "pt3", 00:12:30.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.142 "is_configured": true, 00:12:30.142 "data_offset": 2048, 00:12:30.142 "data_size": 63488 00:12:30.142 }, 00:12:30.142 { 00:12:30.142 "name": "pt4", 00:12:30.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.142 "is_configured": true, 00:12:30.142 "data_offset": 2048, 00:12:30.142 "data_size": 63488 00:12:30.142 } 00:12:30.142 ] 00:12:30.142 } 00:12:30.142 } 00:12:30.142 }' 00:12:30.142 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:30.402 pt2 00:12:30.402 pt3 00:12:30.402 pt4' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.402 09:24:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.402 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 [2024-11-20 09:24:55.880961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a049ddc-7985-4942-887d-fd0602494b8d 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a049ddc-7985-4942-887d-fd0602494b8d ']' 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 [2024-11-20 09:24:55.912581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.663 [2024-11-20 09:24:55.912616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.663 [2024-11-20 09:24:55.912720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.663 [2024-11-20 09:24:55.912818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.663 [2024-11-20 09:24:55.912836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.663 09:24:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 [2024-11-20 09:24:56.084299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:30.664 [2024-11-20 09:24:56.086591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:30.664 [2024-11-20 09:24:56.086653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:30.664 [2024-11-20 09:24:56.086688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:30.664 [2024-11-20 09:24:56.086743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:30.664 [2024-11-20 09:24:56.086805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:30.664 [2024-11-20 09:24:56.086825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:30.664 [2024-11-20 09:24:56.086844] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:30.664 [2024-11-20 09:24:56.086857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.664 [2024-11-20 09:24:56.086869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:30.664 request: 00:12:30.664 { 00:12:30.664 "name": "raid_bdev1", 00:12:30.664 "raid_level": "raid1", 00:12:30.664 "base_bdevs": [ 00:12:30.664 "malloc1", 00:12:30.664 "malloc2", 00:12:30.664 "malloc3", 00:12:30.664 "malloc4" 00:12:30.664 ], 00:12:30.664 "superblock": false, 00:12:30.664 "method": "bdev_raid_create", 00:12:30.664 "req_id": 1 00:12:30.664 } 00:12:30.664 Got JSON-RPC error response 00:12:30.664 response: 00:12:30.664 { 00:12:30.664 "code": -17, 00:12:30.664 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:30.664 } 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.664 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.923 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:30.923 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:30.923 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.923 
09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.923 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.923 [2024-11-20 09:24:56.148121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.923 [2024-11-20 09:24:56.148185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.923 [2024-11-20 09:24:56.148206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:30.923 [2024-11-20 09:24:56.148220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.924 [2024-11-20 09:24:56.150896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.924 [2024-11-20 09:24:56.150940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.924 [2024-11-20 09:24:56.151032] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:30.924 [2024-11-20 09:24:56.151099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:30.924 pt1 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.924 09:24:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.924 "name": "raid_bdev1", 00:12:30.924 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:30.924 "strip_size_kb": 0, 00:12:30.924 "state": "configuring", 00:12:30.924 "raid_level": "raid1", 00:12:30.924 "superblock": true, 00:12:30.924 "num_base_bdevs": 4, 00:12:30.924 "num_base_bdevs_discovered": 1, 00:12:30.924 "num_base_bdevs_operational": 4, 00:12:30.924 "base_bdevs_list": [ 00:12:30.924 { 00:12:30.924 "name": "pt1", 00:12:30.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.924 "is_configured": true, 00:12:30.924 "data_offset": 2048, 00:12:30.924 "data_size": 63488 00:12:30.924 }, 00:12:30.924 { 00:12:30.924 "name": null, 00:12:30.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.924 "is_configured": false, 00:12:30.924 "data_offset": 2048, 00:12:30.924 "data_size": 63488 00:12:30.924 }, 00:12:30.924 { 00:12:30.924 "name": null, 00:12:30.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.924 
"is_configured": false, 00:12:30.924 "data_offset": 2048, 00:12:30.924 "data_size": 63488 00:12:30.924 }, 00:12:30.924 { 00:12:30.924 "name": null, 00:12:30.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.924 "is_configured": false, 00:12:30.924 "data_offset": 2048, 00:12:30.924 "data_size": 63488 00:12:30.924 } 00:12:30.924 ] 00:12:30.924 }' 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.924 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.186 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:31.186 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.186 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.186 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.446 [2024-11-20 09:24:56.639588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.446 [2024-11-20 09:24:56.639742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.446 [2024-11-20 09:24:56.639776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:31.446 [2024-11-20 09:24:56.639795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.446 [2024-11-20 09:24:56.640394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.446 [2024-11-20 09:24:56.640444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.446 [2024-11-20 09:24:56.640557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:31.446 [2024-11-20 09:24:56.640605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:31.446 pt2 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.446 [2024-11-20 09:24:56.651582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.446 "name": "raid_bdev1", 00:12:31.446 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:31.446 "strip_size_kb": 0, 00:12:31.446 "state": "configuring", 00:12:31.446 "raid_level": "raid1", 00:12:31.446 "superblock": true, 00:12:31.446 "num_base_bdevs": 4, 00:12:31.446 "num_base_bdevs_discovered": 1, 00:12:31.446 "num_base_bdevs_operational": 4, 00:12:31.446 "base_bdevs_list": [ 00:12:31.446 { 00:12:31.446 "name": "pt1", 00:12:31.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.446 "is_configured": true, 00:12:31.446 "data_offset": 2048, 00:12:31.446 "data_size": 63488 00:12:31.446 }, 00:12:31.446 { 00:12:31.446 "name": null, 00:12:31.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.446 "is_configured": false, 00:12:31.446 "data_offset": 0, 00:12:31.446 "data_size": 63488 00:12:31.446 }, 00:12:31.446 { 00:12:31.446 "name": null, 00:12:31.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.446 "is_configured": false, 00:12:31.446 "data_offset": 2048, 00:12:31.446 "data_size": 63488 00:12:31.446 }, 00:12:31.446 { 00:12:31.446 "name": null, 00:12:31.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.446 "is_configured": false, 00:12:31.446 "data_offset": 2048, 00:12:31.446 "data_size": 63488 00:12:31.446 } 00:12:31.446 ] 00:12:31.446 }' 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.446 09:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.706 [2024-11-20 09:24:57.134726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.706 [2024-11-20 09:24:57.134822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.706 [2024-11-20 09:24:57.134853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:31.706 [2024-11-20 09:24:57.134865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.706 [2024-11-20 09:24:57.135416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.706 [2024-11-20 09:24:57.135460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.706 [2024-11-20 09:24:57.135574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:31.706 [2024-11-20 09:24:57.135619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.706 pt2 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.706 09:24:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.706 [2024-11-20 09:24:57.146695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.706 [2024-11-20 09:24:57.146774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.706 [2024-11-20 09:24:57.146799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:31.706 [2024-11-20 09:24:57.146809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.706 [2024-11-20 09:24:57.147345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.706 [2024-11-20 09:24:57.147370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.706 [2024-11-20 09:24:57.147487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:31.706 [2024-11-20 09:24:57.147520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.706 pt3 00:12:31.706 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.707 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:31.707 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.707 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:31.707 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.707 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.707 [2024-11-20 09:24:57.158612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:31.707 [2024-11-20 
09:24:57.158665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.707 [2024-11-20 09:24:57.158684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:31.707 [2024-11-20 09:24:57.158693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.707 [2024-11-20 09:24:57.159156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.707 [2024-11-20 09:24:57.159178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:31.707 [2024-11-20 09:24:57.159250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:31.707 [2024-11-20 09:24:57.159273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:31.707 [2024-11-20 09:24:57.159456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:31.707 [2024-11-20 09:24:57.159469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.707 [2024-11-20 09:24:57.159781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:31.707 [2024-11-20 09:24:57.159961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:31.707 [2024-11-20 09:24:57.159980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:31.966 [2024-11-20 09:24:57.160141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.966 pt4 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.966 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.967 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.967 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.967 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.967 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.967 "name": "raid_bdev1", 00:12:31.967 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:31.967 "strip_size_kb": 0, 00:12:31.967 "state": "online", 00:12:31.967 "raid_level": "raid1", 00:12:31.967 "superblock": true, 00:12:31.967 "num_base_bdevs": 4, 00:12:31.967 
"num_base_bdevs_discovered": 4, 00:12:31.967 "num_base_bdevs_operational": 4, 00:12:31.967 "base_bdevs_list": [ 00:12:31.967 { 00:12:31.967 "name": "pt1", 00:12:31.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.967 "is_configured": true, 00:12:31.967 "data_offset": 2048, 00:12:31.967 "data_size": 63488 00:12:31.967 }, 00:12:31.967 { 00:12:31.967 "name": "pt2", 00:12:31.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.967 "is_configured": true, 00:12:31.967 "data_offset": 2048, 00:12:31.967 "data_size": 63488 00:12:31.967 }, 00:12:31.967 { 00:12:31.967 "name": "pt3", 00:12:31.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.967 "is_configured": true, 00:12:31.967 "data_offset": 2048, 00:12:31.967 "data_size": 63488 00:12:31.967 }, 00:12:31.967 { 00:12:31.967 "name": "pt4", 00:12:31.967 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.967 "is_configured": true, 00:12:31.967 "data_offset": 2048, 00:12:31.967 "data_size": 63488 00:12:31.967 } 00:12:31.967 ] 00:12:31.967 }' 00:12:31.967 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.967 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.226 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 [2024-11-20 09:24:57.670289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.486 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.486 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.486 "name": "raid_bdev1", 00:12:32.486 "aliases": [ 00:12:32.486 "3a049ddc-7985-4942-887d-fd0602494b8d" 00:12:32.486 ], 00:12:32.486 "product_name": "Raid Volume", 00:12:32.486 "block_size": 512, 00:12:32.486 "num_blocks": 63488, 00:12:32.486 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:32.486 "assigned_rate_limits": { 00:12:32.486 "rw_ios_per_sec": 0, 00:12:32.486 "rw_mbytes_per_sec": 0, 00:12:32.486 "r_mbytes_per_sec": 0, 00:12:32.486 "w_mbytes_per_sec": 0 00:12:32.486 }, 00:12:32.486 "claimed": false, 00:12:32.486 "zoned": false, 00:12:32.486 "supported_io_types": { 00:12:32.486 "read": true, 00:12:32.486 "write": true, 00:12:32.486 "unmap": false, 00:12:32.486 "flush": false, 00:12:32.486 "reset": true, 00:12:32.486 "nvme_admin": false, 00:12:32.486 "nvme_io": false, 00:12:32.486 "nvme_io_md": false, 00:12:32.486 "write_zeroes": true, 00:12:32.486 "zcopy": false, 00:12:32.486 "get_zone_info": false, 00:12:32.486 "zone_management": false, 00:12:32.486 "zone_append": false, 00:12:32.486 "compare": false, 00:12:32.486 "compare_and_write": false, 00:12:32.486 "abort": false, 00:12:32.486 "seek_hole": false, 00:12:32.486 "seek_data": false, 00:12:32.486 "copy": false, 00:12:32.486 "nvme_iov_md": false 00:12:32.486 }, 00:12:32.486 "memory_domains": [ 00:12:32.486 { 00:12:32.486 "dma_device_id": "system", 00:12:32.486 
"dma_device_type": 1 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.486 "dma_device_type": 2 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "system", 00:12:32.486 "dma_device_type": 1 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.486 "dma_device_type": 2 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "system", 00:12:32.486 "dma_device_type": 1 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.486 "dma_device_type": 2 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "system", 00:12:32.486 "dma_device_type": 1 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.486 "dma_device_type": 2 00:12:32.486 } 00:12:32.486 ], 00:12:32.486 "driver_specific": { 00:12:32.486 "raid": { 00:12:32.486 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:32.486 "strip_size_kb": 0, 00:12:32.486 "state": "online", 00:12:32.486 "raid_level": "raid1", 00:12:32.486 "superblock": true, 00:12:32.486 "num_base_bdevs": 4, 00:12:32.486 "num_base_bdevs_discovered": 4, 00:12:32.486 "num_base_bdevs_operational": 4, 00:12:32.486 "base_bdevs_list": [ 00:12:32.486 { 00:12:32.486 "name": "pt1", 00:12:32.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.486 "is_configured": true, 00:12:32.486 "data_offset": 2048, 00:12:32.486 "data_size": 63488 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "name": "pt2", 00:12:32.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.486 "is_configured": true, 00:12:32.486 "data_offset": 2048, 00:12:32.486 "data_size": 63488 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "name": "pt3", 00:12:32.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.486 "is_configured": true, 00:12:32.486 "data_offset": 2048, 00:12:32.486 "data_size": 63488 00:12:32.486 }, 00:12:32.486 { 00:12:32.486 "name": "pt4", 00:12:32.486 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:32.486 "is_configured": true, 00:12:32.486 "data_offset": 2048, 00:12:32.486 "data_size": 63488 00:12:32.486 } 00:12:32.486 ] 00:12:32.486 } 00:12:32.486 } 00:12:32.486 }' 00:12:32.486 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.486 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:32.486 pt2 00:12:32.486 pt3 00:12:32.486 pt4' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.487 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.747 09:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.747 [2024-11-20 09:24:57.989805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a049ddc-7985-4942-887d-fd0602494b8d '!=' 3a049ddc-7985-4942-887d-fd0602494b8d ']' 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.747 [2024-11-20 09:24:58.033460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:32.747 09:24:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.747 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.748 "name": "raid_bdev1", 00:12:32.748 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:32.748 "strip_size_kb": 0, 00:12:32.748 "state": "online", 
00:12:32.748 "raid_level": "raid1", 00:12:32.748 "superblock": true, 00:12:32.748 "num_base_bdevs": 4, 00:12:32.748 "num_base_bdevs_discovered": 3, 00:12:32.748 "num_base_bdevs_operational": 3, 00:12:32.748 "base_bdevs_list": [ 00:12:32.748 { 00:12:32.748 "name": null, 00:12:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.748 "is_configured": false, 00:12:32.748 "data_offset": 0, 00:12:32.748 "data_size": 63488 00:12:32.748 }, 00:12:32.748 { 00:12:32.748 "name": "pt2", 00:12:32.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.748 "is_configured": true, 00:12:32.748 "data_offset": 2048, 00:12:32.748 "data_size": 63488 00:12:32.748 }, 00:12:32.748 { 00:12:32.748 "name": "pt3", 00:12:32.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.748 "is_configured": true, 00:12:32.748 "data_offset": 2048, 00:12:32.748 "data_size": 63488 00:12:32.748 }, 00:12:32.748 { 00:12:32.748 "name": "pt4", 00:12:32.748 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.748 "is_configured": true, 00:12:32.748 "data_offset": 2048, 00:12:32.748 "data_size": 63488 00:12:32.748 } 00:12:32.748 ] 00:12:32.748 }' 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.748 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 [2024-11-20 09:24:58.508594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.317 [2024-11-20 09:24:58.508646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.317 [2024-11-20 09:24:58.508776] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:33.317 [2024-11-20 09:24:58.508891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.317 [2024-11-20 09:24:58.508905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:33.317 
09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:33.317 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.318 [2024-11-20 09:24:58.604415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:33.318 [2024-11-20 09:24:58.604525] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.318 [2024-11-20 09:24:58.604553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:33.318 [2024-11-20 09:24:58.604564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.318 [2024-11-20 09:24:58.607563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.318 [2024-11-20 09:24:58.607618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:33.318 [2024-11-20 09:24:58.607746] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:33.318 [2024-11-20 09:24:58.607819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:33.318 pt2 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.318 "name": "raid_bdev1", 00:12:33.318 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:33.318 "strip_size_kb": 0, 00:12:33.318 "state": "configuring", 00:12:33.318 "raid_level": "raid1", 00:12:33.318 "superblock": true, 00:12:33.318 "num_base_bdevs": 4, 00:12:33.318 "num_base_bdevs_discovered": 1, 00:12:33.318 "num_base_bdevs_operational": 3, 00:12:33.318 "base_bdevs_list": [ 00:12:33.318 { 00:12:33.318 "name": null, 00:12:33.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.318 "is_configured": false, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 }, 00:12:33.318 { 00:12:33.318 "name": "pt2", 00:12:33.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.318 "is_configured": true, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 }, 00:12:33.318 { 00:12:33.318 "name": null, 00:12:33.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.318 "is_configured": false, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 }, 00:12:33.318 { 00:12:33.318 "name": null, 00:12:33.318 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.318 "is_configured": false, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 } 00:12:33.318 ] 00:12:33.318 }' 
00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.318 09:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 [2024-11-20 09:24:59.055758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:33.904 [2024-11-20 09:24:59.055876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.904 [2024-11-20 09:24:59.055907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:33.904 [2024-11-20 09:24:59.055919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.904 [2024-11-20 09:24:59.056576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.904 [2024-11-20 09:24:59.056613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:33.904 [2024-11-20 09:24:59.056744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:33.904 [2024-11-20 09:24:59.056780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:33.904 pt3 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.904 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.905 "name": "raid_bdev1", 00:12:33.905 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:33.905 "strip_size_kb": 0, 00:12:33.905 "state": "configuring", 00:12:33.905 "raid_level": "raid1", 00:12:33.905 "superblock": true, 00:12:33.905 "num_base_bdevs": 4, 00:12:33.905 "num_base_bdevs_discovered": 2, 00:12:33.905 "num_base_bdevs_operational": 3, 00:12:33.905 
"base_bdevs_list": [ 00:12:33.905 { 00:12:33.905 "name": null, 00:12:33.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.905 "is_configured": false, 00:12:33.905 "data_offset": 2048, 00:12:33.905 "data_size": 63488 00:12:33.905 }, 00:12:33.905 { 00:12:33.905 "name": "pt2", 00:12:33.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.905 "is_configured": true, 00:12:33.905 "data_offset": 2048, 00:12:33.905 "data_size": 63488 00:12:33.905 }, 00:12:33.905 { 00:12:33.905 "name": "pt3", 00:12:33.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.905 "is_configured": true, 00:12:33.905 "data_offset": 2048, 00:12:33.905 "data_size": 63488 00:12:33.905 }, 00:12:33.905 { 00:12:33.905 "name": null, 00:12:33.905 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.905 "is_configured": false, 00:12:33.905 "data_offset": 2048, 00:12:33.905 "data_size": 63488 00:12:33.905 } 00:12:33.905 ] 00:12:33.905 }' 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.905 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.165 [2024-11-20 09:24:59.495059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:34.165 [2024-11-20 09:24:59.495164] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.165 [2024-11-20 09:24:59.495195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:34.165 [2024-11-20 09:24:59.495206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.165 [2024-11-20 09:24:59.495858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.165 [2024-11-20 09:24:59.495889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:34.165 [2024-11-20 09:24:59.496000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:34.165 [2024-11-20 09:24:59.496045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:34.165 [2024-11-20 09:24:59.496226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:34.165 [2024-11-20 09:24:59.496243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.165 [2024-11-20 09:24:59.496570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:34.165 [2024-11-20 09:24:59.496761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:34.165 [2024-11-20 09:24:59.496797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:34.165 [2024-11-20 09:24:59.496978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.165 pt4 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.165 "name": "raid_bdev1", 00:12:34.165 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:34.165 "strip_size_kb": 0, 00:12:34.165 "state": "online", 00:12:34.165 "raid_level": "raid1", 00:12:34.165 "superblock": true, 00:12:34.165 "num_base_bdevs": 4, 00:12:34.165 "num_base_bdevs_discovered": 3, 00:12:34.165 "num_base_bdevs_operational": 3, 00:12:34.165 "base_bdevs_list": [ 00:12:34.165 { 00:12:34.165 "name": null, 00:12:34.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.165 "is_configured": false, 00:12:34.165 
"data_offset": 2048, 00:12:34.165 "data_size": 63488 00:12:34.165 }, 00:12:34.165 { 00:12:34.165 "name": "pt2", 00:12:34.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.165 "is_configured": true, 00:12:34.165 "data_offset": 2048, 00:12:34.165 "data_size": 63488 00:12:34.165 }, 00:12:34.165 { 00:12:34.165 "name": "pt3", 00:12:34.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.165 "is_configured": true, 00:12:34.165 "data_offset": 2048, 00:12:34.165 "data_size": 63488 00:12:34.165 }, 00:12:34.165 { 00:12:34.165 "name": "pt4", 00:12:34.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:34.165 "is_configured": true, 00:12:34.165 "data_offset": 2048, 00:12:34.165 "data_size": 63488 00:12:34.165 } 00:12:34.165 ] 00:12:34.165 }' 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.165 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.733 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.733 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.733 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.733 [2024-11-20 09:24:59.930224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.733 [2024-11-20 09:24:59.930274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.734 [2024-11-20 09:24:59.930383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.734 [2024-11-20 09:24:59.930495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.734 [2024-11-20 09:24:59.930514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:34.734 09:24:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.734 09:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 [2024-11-20 09:25:00.002044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:34.734 [2024-11-20 09:25:00.002112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:34.734 [2024-11-20 09:25:00.002134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:34.734 [2024-11-20 09:25:00.002148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.734 [2024-11-20 09:25:00.004792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.734 [2024-11-20 09:25:00.004831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:34.734 [2024-11-20 09:25:00.004919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:34.734 [2024-11-20 09:25:00.004977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:34.734 [2024-11-20 09:25:00.005115] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:34.734 [2024-11-20 09:25:00.005134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.734 [2024-11-20 09:25:00.005159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:34.734 [2024-11-20 09:25:00.005220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:34.734 [2024-11-20 09:25:00.005328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:34.734 pt1 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.734 "name": "raid_bdev1", 00:12:34.734 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:34.734 "strip_size_kb": 0, 00:12:34.734 "state": "configuring", 00:12:34.734 "raid_level": "raid1", 00:12:34.734 "superblock": true, 00:12:34.734 "num_base_bdevs": 4, 00:12:34.734 "num_base_bdevs_discovered": 2, 00:12:34.734 "num_base_bdevs_operational": 3, 00:12:34.734 "base_bdevs_list": [ 00:12:34.734 { 00:12:34.734 "name": null, 00:12:34.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.734 "is_configured": false, 00:12:34.734 "data_offset": 2048, 00:12:34.734 
"data_size": 63488 00:12:34.734 }, 00:12:34.734 { 00:12:34.734 "name": "pt2", 00:12:34.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.734 "is_configured": true, 00:12:34.734 "data_offset": 2048, 00:12:34.734 "data_size": 63488 00:12:34.734 }, 00:12:34.734 { 00:12:34.734 "name": "pt3", 00:12:34.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.734 "is_configured": true, 00:12:34.734 "data_offset": 2048, 00:12:34.734 "data_size": 63488 00:12:34.734 }, 00:12:34.734 { 00:12:34.734 "name": null, 00:12:34.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:34.734 "is_configured": false, 00:12:34.734 "data_offset": 2048, 00:12:34.734 "data_size": 63488 00:12:34.734 } 00:12:34.734 ] 00:12:34.734 }' 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.734 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.302 [2024-11-20 
09:25:00.537236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:35.302 [2024-11-20 09:25:00.537339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.302 [2024-11-20 09:25:00.537368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:35.302 [2024-11-20 09:25:00.537378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.302 [2024-11-20 09:25:00.537959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.302 [2024-11-20 09:25:00.537991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:35.302 [2024-11-20 09:25:00.538102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:35.302 [2024-11-20 09:25:00.538149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:35.302 [2024-11-20 09:25:00.538319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:35.302 [2024-11-20 09:25:00.538336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.302 [2024-11-20 09:25:00.538675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:35.302 [2024-11-20 09:25:00.538882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:35.302 [2024-11-20 09:25:00.538902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:35.302 [2024-11-20 09:25:00.539069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.302 pt4 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.302 09:25:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.302 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.303 "name": "raid_bdev1", 00:12:35.303 "uuid": "3a049ddc-7985-4942-887d-fd0602494b8d", 00:12:35.303 "strip_size_kb": 0, 00:12:35.303 "state": "online", 00:12:35.303 "raid_level": "raid1", 00:12:35.303 "superblock": true, 00:12:35.303 "num_base_bdevs": 4, 00:12:35.303 "num_base_bdevs_discovered": 3, 00:12:35.303 "num_base_bdevs_operational": 3, 00:12:35.303 "base_bdevs_list": [ 00:12:35.303 { 
00:12:35.303 "name": null, 00:12:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.303 "is_configured": false, 00:12:35.303 "data_offset": 2048, 00:12:35.303 "data_size": 63488 00:12:35.303 }, 00:12:35.303 { 00:12:35.303 "name": "pt2", 00:12:35.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.303 "is_configured": true, 00:12:35.303 "data_offset": 2048, 00:12:35.303 "data_size": 63488 00:12:35.303 }, 00:12:35.303 { 00:12:35.303 "name": "pt3", 00:12:35.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.303 "is_configured": true, 00:12:35.303 "data_offset": 2048, 00:12:35.303 "data_size": 63488 00:12:35.303 }, 00:12:35.303 { 00:12:35.303 "name": "pt4", 00:12:35.303 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.303 "is_configured": true, 00:12:35.303 "data_offset": 2048, 00:12:35.303 "data_size": 63488 00:12:35.303 } 00:12:35.303 ] 00:12:35.303 }' 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.303 09:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.563 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:35.563 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:35.563 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.563 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:35.822 
09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.822 [2024-11-20 09:25:01.060679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3a049ddc-7985-4942-887d-fd0602494b8d '!=' 3a049ddc-7985-4942-887d-fd0602494b8d ']' 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74909 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74909 ']' 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74909 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74909 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.822 killing process with pid 74909 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74909' 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74909 00:12:35.822 [2024-11-20 09:25:01.140304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.822 09:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74909 00:12:35.823 [2024-11-20 09:25:01.140465] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.823 [2024-11-20 09:25:01.140559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.823 [2024-11-20 09:25:01.140578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:36.393 [2024-11-20 09:25:01.601885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.783 09:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:37.783 00:12:37.783 real 0m9.134s 00:12:37.783 user 0m14.233s 00:12:37.783 sys 0m1.715s 00:12:37.783 09:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.783 09:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.783 ************************************ 00:12:37.783 END TEST raid_superblock_test 00:12:37.783 ************************************ 00:12:37.783 09:25:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:37.783 09:25:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:37.783 09:25:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.783 09:25:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.783 ************************************ 00:12:37.783 START TEST raid_read_error_test 00:12:37.783 ************************************ 00:12:37.783 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:37.783 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:37.783 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:37.784 09:25:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pS07r54S8d 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75407 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75407 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75407 ']' 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.784 09:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.784 [2024-11-20 09:25:03.081595] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:12:37.784 [2024-11-20 09:25:03.082092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75407 ] 00:12:38.042 [2024-11-20 09:25:03.255144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.042 [2024-11-20 09:25:03.404280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.301 [2024-11-20 09:25:03.670035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.301 [2024-11-20 09:25:03.670088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.560 09:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.560 09:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:38.560 09:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.560 09:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.560 09:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.560 09:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 BaseBdev1_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 true 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 [2024-11-20 09:25:04.034257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.819 [2024-11-20 09:25:04.034330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.819 [2024-11-20 09:25:04.034353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.819 [2024-11-20 09:25:04.034367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.819 [2024-11-20 09:25:04.037013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.819 [2024-11-20 09:25:04.037059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.819 BaseBdev1 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 BaseBdev2_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 true 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 [2024-11-20 09:25:04.113847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.819 [2024-11-20 09:25:04.113918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.819 [2024-11-20 09:25:04.113938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.819 [2024-11-20 09:25:04.113952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.819 [2024-11-20 09:25:04.116536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.819 [2024-11-20 09:25:04.116579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.819 BaseBdev2 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 BaseBdev3_malloc 00:12:38.819 09:25:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.819 true 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.819 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.820 [2024-11-20 09:25:04.205132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.820 [2024-11-20 09:25:04.205210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.820 [2024-11-20 09:25:04.205233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:38.820 [2024-11-20 09:25:04.205246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.820 [2024-11-20 09:25:04.207848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.820 [2024-11-20 09:25:04.207887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.820 BaseBdev3 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.820 BaseBdev4_malloc 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.820 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.079 true 00:12:39.079 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.079 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:39.079 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.079 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.079 [2024-11-20 09:25:04.282884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:39.079 [2024-11-20 09:25:04.282952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.079 [2024-11-20 09:25:04.282972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:39.079 [2024-11-20 09:25:04.282984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.079 [2024-11-20 09:25:04.285560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.079 [2024-11-20 09:25:04.285600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:39.080 BaseBdev4 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.080 [2024-11-20 09:25:04.294912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.080 [2024-11-20 09:25:04.297202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.080 [2024-11-20 09:25:04.297299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.080 [2024-11-20 09:25:04.297375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.080 [2024-11-20 09:25:04.297661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:39.080 [2024-11-20 09:25:04.297682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.080 [2024-11-20 09:25:04.297965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:39.080 [2024-11-20 09:25:04.298168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:39.080 [2024-11-20 09:25:04.298183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:39.080 [2024-11-20 09:25:04.298365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:39.080 09:25:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.080 "name": "raid_bdev1", 00:12:39.080 "uuid": "849c132b-50d5-4f2d-b31c-238cb501cf6d", 00:12:39.080 "strip_size_kb": 0, 00:12:39.080 "state": "online", 00:12:39.080 "raid_level": "raid1", 00:12:39.080 "superblock": true, 00:12:39.080 "num_base_bdevs": 4, 00:12:39.080 "num_base_bdevs_discovered": 4, 00:12:39.080 "num_base_bdevs_operational": 4, 00:12:39.080 "base_bdevs_list": [ 00:12:39.080 { 
00:12:39.080 "name": "BaseBdev1", 00:12:39.080 "uuid": "3c38d7a0-fb5b-5027-bddc-25bfe4d24550", 00:12:39.080 "is_configured": true, 00:12:39.080 "data_offset": 2048, 00:12:39.080 "data_size": 63488 00:12:39.080 }, 00:12:39.080 { 00:12:39.080 "name": "BaseBdev2", 00:12:39.080 "uuid": "51eb152f-d86d-58ce-b27d-c003b3310d6c", 00:12:39.080 "is_configured": true, 00:12:39.080 "data_offset": 2048, 00:12:39.080 "data_size": 63488 00:12:39.080 }, 00:12:39.080 { 00:12:39.080 "name": "BaseBdev3", 00:12:39.080 "uuid": "70b5d436-4a5a-5f1c-bee8-b01c304194ae", 00:12:39.080 "is_configured": true, 00:12:39.080 "data_offset": 2048, 00:12:39.080 "data_size": 63488 00:12:39.080 }, 00:12:39.080 { 00:12:39.080 "name": "BaseBdev4", 00:12:39.080 "uuid": "d58cccc8-5247-562a-b9d8-afc10d2348e7", 00:12:39.080 "is_configured": true, 00:12:39.080 "data_offset": 2048, 00:12:39.080 "data_size": 63488 00:12:39.080 } 00:12:39.080 ] 00:12:39.080 }' 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.080 09:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.339 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.339 09:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.598 [2024-11-20 09:25:04.811411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.538 09:25:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.538 09:25:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.538 "name": "raid_bdev1", 00:12:40.538 "uuid": "849c132b-50d5-4f2d-b31c-238cb501cf6d", 00:12:40.538 "strip_size_kb": 0, 00:12:40.538 "state": "online", 00:12:40.538 "raid_level": "raid1", 00:12:40.538 "superblock": true, 00:12:40.538 "num_base_bdevs": 4, 00:12:40.538 "num_base_bdevs_discovered": 4, 00:12:40.538 "num_base_bdevs_operational": 4, 00:12:40.538 "base_bdevs_list": [ 00:12:40.538 { 00:12:40.538 "name": "BaseBdev1", 00:12:40.538 "uuid": "3c38d7a0-fb5b-5027-bddc-25bfe4d24550", 00:12:40.538 "is_configured": true, 00:12:40.538 "data_offset": 2048, 00:12:40.538 "data_size": 63488 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "name": "BaseBdev2", 00:12:40.538 "uuid": "51eb152f-d86d-58ce-b27d-c003b3310d6c", 00:12:40.538 "is_configured": true, 00:12:40.538 "data_offset": 2048, 00:12:40.538 "data_size": 63488 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "name": "BaseBdev3", 00:12:40.538 "uuid": "70b5d436-4a5a-5f1c-bee8-b01c304194ae", 00:12:40.538 "is_configured": true, 00:12:40.538 "data_offset": 2048, 00:12:40.538 "data_size": 63488 00:12:40.538 }, 00:12:40.538 { 00:12:40.538 "name": "BaseBdev4", 00:12:40.538 "uuid": "d58cccc8-5247-562a-b9d8-afc10d2348e7", 00:12:40.538 "is_configured": true, 00:12:40.538 "data_offset": 2048, 00:12:40.538 "data_size": 63488 00:12:40.538 } 00:12:40.538 ] 00:12:40.538 }' 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.538 09:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.797 [2024-11-20 09:25:06.218199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.797 [2024-11-20 09:25:06.218336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.797 [2024-11-20 09:25:06.221268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.797 [2024-11-20 09:25:06.221390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.797 [2024-11-20 09:25:06.221557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.797 [2024-11-20 09:25:06.221640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:40.797 { 00:12:40.797 "results": [ 00:12:40.797 { 00:12:40.797 "job": "raid_bdev1", 00:12:40.797 "core_mask": "0x1", 00:12:40.797 "workload": "randrw", 00:12:40.797 "percentage": 50, 00:12:40.797 "status": "finished", 00:12:40.797 "queue_depth": 1, 00:12:40.797 "io_size": 131072, 00:12:40.797 "runtime": 1.407548, 00:12:40.797 "iops": 7500.277077584566, 00:12:40.797 "mibps": 937.5346346980707, 00:12:40.797 "io_failed": 0, 00:12:40.797 "io_timeout": 0, 00:12:40.797 "avg_latency_us": 130.52152618784368, 00:12:40.797 "min_latency_us": 24.258515283842794, 00:12:40.797 "max_latency_us": 1616.9362445414847 00:12:40.797 } 00:12:40.797 ], 00:12:40.797 "core_count": 1 00:12:40.797 } 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75407 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75407 ']' 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75407 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.797 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75407 00:12:41.057 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.057 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.057 killing process with pid 75407 00:12:41.057 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75407' 00:12:41.057 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75407 00:12:41.057 [2024-11-20 09:25:06.263347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.057 09:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75407 00:12:41.317 [2024-11-20 09:25:06.634514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pS07r54S8d 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:42.697 ************************************ 00:12:42.697 END TEST raid_read_error_test 00:12:42.697 ************************************ 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:42.697 00:12:42.697 real 0m5.016s 00:12:42.697 user 0m5.776s 00:12:42.697 sys 0m0.698s 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.697 09:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.697 09:25:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:42.697 09:25:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:42.697 09:25:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.697 09:25:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.697 ************************************ 00:12:42.697 START TEST raid_write_error_test 00:12:42.697 ************************************ 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gXvrrU87pe 00:12:42.697 09:25:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75553 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75553 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75553 ']' 00:12:42.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.697 09:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.956 [2024-11-20 09:25:08.170603] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:12:42.956 [2024-11-20 09:25:08.170725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75553 ] 00:12:42.956 [2024-11-20 09:25:08.349037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.215 [2024-11-20 09:25:08.499866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.488 [2024-11-20 09:25:08.780842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.488 [2024-11-20 09:25:08.780913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.749 BaseBdev1_malloc 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.749 true 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.749 [2024-11-20 09:25:09.091911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:43.749 [2024-11-20 09:25:09.092025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.749 [2024-11-20 09:25:09.092052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:43.749 [2024-11-20 09:25:09.092063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.749 [2024-11-20 09:25:09.094331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.749 [2024-11-20 09:25:09.094374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.749 BaseBdev1 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.749 BaseBdev2_malloc 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:43.749 09:25:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.749 true 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.749 [2024-11-20 09:25:09.158500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:43.749 [2024-11-20 09:25:09.158558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.749 [2024-11-20 09:25:09.158577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:43.749 [2024-11-20 09:25:09.158588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.749 [2024-11-20 09:25:09.160887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.749 [2024-11-20 09:25:09.160929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.749 BaseBdev2 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.749 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:44.008 BaseBdev3_malloc 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 true 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 [2024-11-20 09:25:09.240372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.008 [2024-11-20 09:25:09.240448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.008 [2024-11-20 09:25:09.240470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.008 [2024-11-20 09:25:09.240482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.008 [2024-11-20 09:25:09.242672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.008 [2024-11-20 09:25:09.242772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.008 BaseBdev3 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 BaseBdev4_malloc 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 true 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 [2024-11-20 09:25:09.307710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:44.008 [2024-11-20 09:25:09.307766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.008 [2024-11-20 09:25:09.307786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.008 [2024-11-20 09:25:09.307796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.008 [2024-11-20 09:25:09.310016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.008 [2024-11-20 09:25:09.310145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.008 BaseBdev4 
00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 [2024-11-20 09:25:09.319780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.008 [2024-11-20 09:25:09.321810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.008 [2024-11-20 09:25:09.321899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.008 [2024-11-20 09:25:09.321973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.008 [2024-11-20 09:25:09.322221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:44.008 [2024-11-20 09:25:09.322238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.008 [2024-11-20 09:25:09.322523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:44.008 [2024-11-20 09:25:09.322727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:44.008 [2024-11-20 09:25:09.322742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:44.008 [2024-11-20 09:25:09.322913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.008 "name": "raid_bdev1", 00:12:44.008 "uuid": "b2b213ec-7cb9-49c1-ab23-79f9216f5dcc", 00:12:44.008 "strip_size_kb": 0, 00:12:44.008 "state": "online", 00:12:44.008 "raid_level": "raid1", 00:12:44.008 "superblock": true, 00:12:44.008 "num_base_bdevs": 4, 00:12:44.008 "num_base_bdevs_discovered": 4, 00:12:44.008 
"num_base_bdevs_operational": 4, 00:12:44.008 "base_bdevs_list": [ 00:12:44.008 { 00:12:44.008 "name": "BaseBdev1", 00:12:44.008 "uuid": "446f2cca-fefc-59c6-aa78-ed63d50e719e", 00:12:44.008 "is_configured": true, 00:12:44.008 "data_offset": 2048, 00:12:44.008 "data_size": 63488 00:12:44.008 }, 00:12:44.008 { 00:12:44.008 "name": "BaseBdev2", 00:12:44.008 "uuid": "1a324f83-3550-59ee-babd-cbc681ce4e70", 00:12:44.008 "is_configured": true, 00:12:44.008 "data_offset": 2048, 00:12:44.008 "data_size": 63488 00:12:44.008 }, 00:12:44.008 { 00:12:44.008 "name": "BaseBdev3", 00:12:44.008 "uuid": "55ce91b9-c091-544b-95c0-4285f97b0548", 00:12:44.008 "is_configured": true, 00:12:44.008 "data_offset": 2048, 00:12:44.008 "data_size": 63488 00:12:44.008 }, 00:12:44.008 { 00:12:44.008 "name": "BaseBdev4", 00:12:44.008 "uuid": "5a990dbb-8e85-57eb-944f-ad0c60f6e3a2", 00:12:44.008 "is_configured": true, 00:12:44.008 "data_offset": 2048, 00:12:44.008 "data_size": 63488 00:12:44.008 } 00:12:44.008 ] 00:12:44.008 }' 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.008 09:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.266 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:44.266 09:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:44.526 [2024-11-20 09:25:09.805015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.465 [2024-11-20 09:25:10.719725] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:45.465 [2024-11-20 09:25:10.719792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.465 [2024-11-20 09:25:10.720038] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.465 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.466 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.466 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.466 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.466 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.466 "name": "raid_bdev1", 00:12:45.466 "uuid": "b2b213ec-7cb9-49c1-ab23-79f9216f5dcc", 00:12:45.466 "strip_size_kb": 0, 00:12:45.466 "state": "online", 00:12:45.466 "raid_level": "raid1", 00:12:45.466 "superblock": true, 00:12:45.466 "num_base_bdevs": 4, 00:12:45.466 "num_base_bdevs_discovered": 3, 00:12:45.466 "num_base_bdevs_operational": 3, 00:12:45.466 "base_bdevs_list": [ 00:12:45.466 { 00:12:45.466 "name": null, 00:12:45.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.466 "is_configured": false, 00:12:45.466 "data_offset": 0, 00:12:45.466 "data_size": 63488 00:12:45.466 }, 00:12:45.466 { 00:12:45.466 "name": "BaseBdev2", 00:12:45.466 "uuid": "1a324f83-3550-59ee-babd-cbc681ce4e70", 00:12:45.466 "is_configured": true, 00:12:45.466 "data_offset": 2048, 00:12:45.466 "data_size": 63488 00:12:45.466 }, 00:12:45.466 { 00:12:45.466 "name": "BaseBdev3", 00:12:45.466 "uuid": "55ce91b9-c091-544b-95c0-4285f97b0548", 00:12:45.466 "is_configured": true, 00:12:45.466 "data_offset": 2048, 00:12:45.466 "data_size": 63488 00:12:45.466 }, 00:12:45.466 { 00:12:45.466 "name": "BaseBdev4", 00:12:45.466 "uuid": "5a990dbb-8e85-57eb-944f-ad0c60f6e3a2", 00:12:45.466 "is_configured": true, 00:12:45.466 "data_offset": 2048, 00:12:45.466 "data_size": 63488 00:12:45.466 } 00:12:45.466 ] 
00:12:45.466 }' 00:12:45.466 09:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.466 09:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.038 [2024-11-20 09:25:11.188101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.038 [2024-11-20 09:25:11.188138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.038 [2024-11-20 09:25:11.191297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.038 [2024-11-20 09:25:11.191349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.038 [2024-11-20 09:25:11.191469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.038 [2024-11-20 09:25:11.191495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:46.038 { 00:12:46.038 "results": [ 00:12:46.038 { 00:12:46.038 "job": "raid_bdev1", 00:12:46.038 "core_mask": "0x1", 00:12:46.038 "workload": "randrw", 00:12:46.038 "percentage": 50, 00:12:46.038 "status": "finished", 00:12:46.038 "queue_depth": 1, 00:12:46.038 "io_size": 131072, 00:12:46.038 "runtime": 1.383856, 00:12:46.038 "iops": 10713.542449503417, 00:12:46.038 "mibps": 1339.192806187927, 00:12:46.038 "io_failed": 0, 00:12:46.038 "io_timeout": 0, 00:12:46.038 "avg_latency_us": 90.42409787597263, 00:12:46.038 "min_latency_us": 24.258515283842794, 00:12:46.038 "max_latency_us": 1681.3275109170306 00:12:46.038 } 00:12:46.038 ], 00:12:46.038 "core_count": 1 
00:12:46.038 } 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75553 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75553 ']' 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75553 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75553 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75553' 00:12:46.038 killing process with pid 75553 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75553 00:12:46.038 09:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75553 00:12:46.038 [2024-11-20 09:25:11.222644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.300 [2024-11-20 09:25:11.585811] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gXvrrU87pe 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:47.680 ************************************ 00:12:47.680 END TEST 
raid_write_error_test 00:12:47.680 ************************************ 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:47.680 00:12:47.680 real 0m4.811s 00:12:47.680 user 0m5.575s 00:12:47.680 sys 0m0.596s 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.680 09:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.680 09:25:12 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:47.680 09:25:12 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:47.680 09:25:12 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:47.680 09:25:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:47.680 09:25:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.680 09:25:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.680 ************************************ 00:12:47.680 START TEST raid_rebuild_test 00:12:47.680 ************************************ 00:12:47.680 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
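The bdevperf results block earlier in this run reports `"iops": 10713.54` and `"mibps": 1339.19` for 128 KiB I/Os over a 1.38 s runtime. As a standalone sanity check (this sketch is not part of the SPDK test scripts; the figures are copied from the results JSON above), the MiB/s value follows directly from IOPS × io_size, and IOPS × runtime recovers the completed I/O count:

```python
# Sanity-check the bdevperf summary printed in this run:
# iops * io_size (bytes) / 2**20 should equal the reported MiB/s.
results = {
    "iops": 10713.542449503417,
    "io_size": 131072,           # 128 KiB per I/O
    "mibps": 1339.192806187927,
    "runtime": 1.383856,         # seconds
}

derived_mibps = results["iops"] * results["io_size"] / 2**20
assert abs(derived_mibps - results["mibps"]) < 1e-6

# iops was computed as io_count / runtime, so the product is an integer:
total_ios = round(results["iops"] * results["runtime"])
print(f"{derived_mibps:.2f} MiB/s, ~{total_ios} I/Os in {results['runtime']:.2f}s")
```

The `fail_per_s` check in the script (`[[ 0.00 = \0\.\0\0 ]]`) then confirms that none of those I/Os failed despite the injected base-bdev error, since raid1 keeps redundancy with three surviving members.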
00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75702 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75702 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75702 ']' 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.681 09:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.681 [2024-11-20 09:25:13.046582] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:12:47.681 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.681 Zero copy mechanism will not be used. 
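The bdevperf banner above notes that the `-o 3M` I/O size (3145728 bytes) exceeds the zero-copy threshold of 65536 bytes, so zero copy is skipped. A quick standalone check of that arithmetic (the threshold value and the binary K/M/G suffix convention are taken from the log message and the bdevperf flags shown here, not from SPDK headers):

```python
# The rebuild test launches bdevperf with "-o 3M"; the log warns that this
# exceeds the zero-copy threshold of 65536 bytes, so zero copy is disabled.
def parse_size(s: str) -> int:
    """Parse a bdevperf-style size with an optional K/M/G binary suffix."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    if s and s[-1].upper() in units:
        return int(s[:-1]) * units[s[-1].upper()]
    return int(s)

ZERO_COPY_THRESHOLD = 65536  # from the notice in the log above

io_size = parse_size("3M")
assert io_size == 3145728            # matches the value printed in the log
print("zero copy used:", io_size <= ZERO_COPY_THRESHOLD)
```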
00:12:47.681 [2024-11-20 09:25:13.047101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75702 ] 00:12:47.940 [2024-11-20 09:25:13.202654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.940 [2024-11-20 09:25:13.351561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.199 [2024-11-20 09:25:13.555042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.199 [2024-11-20 09:25:13.555112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.458 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.458 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:48.458 09:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.458 09:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:48.458 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.458 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.718 BaseBdev1_malloc 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.718 [2024-11-20 09:25:13.964504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:48.718 
[2024-11-20 09:25:13.964663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.718 [2024-11-20 09:25:13.964702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:48.718 [2024-11-20 09:25:13.964719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.718 [2024-11-20 09:25:13.967466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.718 [2024-11-20 09:25:13.967510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:48.718 BaseBdev1 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:48.718 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 BaseBdev2_malloc 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 [2024-11-20 09:25:14.026281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:48.719 [2024-11-20 09:25:14.026369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.719 [2024-11-20 09:25:14.026393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:48.719 [2024-11-20 09:25:14.026406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.719 [2024-11-20 09:25:14.028925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.719 [2024-11-20 09:25:14.028969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:48.719 BaseBdev2 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 spare_malloc 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 spare_delay 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 [2024-11-20 09:25:14.107250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:48.719 [2024-11-20 09:25:14.107367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:48.719 [2024-11-20 09:25:14.107394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:48.719 [2024-11-20 09:25:14.107405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.719 [2024-11-20 09:25:14.109798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.719 [2024-11-20 09:25:14.109842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:48.719 spare 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 [2024-11-20 09:25:14.119287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.719 [2024-11-20 09:25:14.121340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.719 [2024-11-20 09:25:14.121433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:48.719 [2024-11-20 09:25:14.121447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:48.719 [2024-11-20 09:25:14.121775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:48.719 [2024-11-20 09:25:14.121958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:48.719 [2024-11-20 09:25:14.121971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:48.719 [2024-11-20 09:25:14.122159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.979 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.979 "name": "raid_bdev1", 00:12:48.979 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:48.979 "strip_size_kb": 0, 00:12:48.979 "state": "online", 00:12:48.979 
"raid_level": "raid1", 00:12:48.979 "superblock": false, 00:12:48.979 "num_base_bdevs": 2, 00:12:48.979 "num_base_bdevs_discovered": 2, 00:12:48.979 "num_base_bdevs_operational": 2, 00:12:48.979 "base_bdevs_list": [ 00:12:48.979 { 00:12:48.979 "name": "BaseBdev1", 00:12:48.979 "uuid": "e4c757c7-69a5-5bea-9222-65120d451e6d", 00:12:48.979 "is_configured": true, 00:12:48.979 "data_offset": 0, 00:12:48.979 "data_size": 65536 00:12:48.979 }, 00:12:48.979 { 00:12:48.979 "name": "BaseBdev2", 00:12:48.979 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:48.979 "is_configured": true, 00:12:48.979 "data_offset": 0, 00:12:48.979 "data_size": 65536 00:12:48.979 } 00:12:48.979 ] 00:12:48.979 }' 00:12:48.979 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.979 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.239 [2024-11-20 09:25:14.574815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.239 09:25:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.239 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:49.499 [2024-11-20 09:25:14.850084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:49.499 /dev/nbd0 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.499 1+0 records in 00:12:49.499 1+0 records out 00:12:49.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640427 s, 6.4 MB/s 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:49.499 09:25:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:53.694 65536+0 records in 00:12:53.694 65536+0 records out 00:12:53.694 33554432 bytes (34 MB, 32 MiB) copied, 4.13837 s, 8.1 MB/s 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.694 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.952 [2024-11-20 09:25:19.283501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.952 [2024-11-20 09:25:19.300182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.952 09:25:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.952 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.952 "name": "raid_bdev1", 00:12:53.952 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:53.952 "strip_size_kb": 0, 00:12:53.952 "state": "online", 00:12:53.952 "raid_level": "raid1", 00:12:53.952 "superblock": false, 00:12:53.952 "num_base_bdevs": 2, 00:12:53.952 "num_base_bdevs_discovered": 1, 00:12:53.952 "num_base_bdevs_operational": 1, 00:12:53.952 "base_bdevs_list": [ 00:12:53.952 { 00:12:53.952 "name": null, 00:12:53.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.952 "is_configured": false, 00:12:53.952 "data_offset": 0, 00:12:53.952 "data_size": 65536 00:12:53.952 }, 00:12:53.952 { 00:12:53.952 "name": "BaseBdev2", 00:12:53.952 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:53.952 "is_configured": true, 00:12:53.952 "data_offset": 0, 00:12:53.953 "data_size": 65536 00:12:53.953 } 00:12:53.953 ] 00:12:53.953 }' 00:12:53.953 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.953 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.521 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.521 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.521 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.521 [2024-11-20 09:25:19.735787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.521 [2024-11-20 09:25:19.753638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:54.521 09:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.521 09:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:54.521 [2024-11-20 09:25:19.755626] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.480 "name": "raid_bdev1", 00:12:55.480 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:55.480 "strip_size_kb": 0, 00:12:55.480 "state": "online", 00:12:55.480 "raid_level": "raid1", 00:12:55.480 "superblock": false, 00:12:55.480 "num_base_bdevs": 2, 00:12:55.480 "num_base_bdevs_discovered": 2, 00:12:55.480 "num_base_bdevs_operational": 2, 00:12:55.480 "process": { 00:12:55.480 "type": "rebuild", 00:12:55.480 "target": "spare", 00:12:55.480 "progress": { 00:12:55.480 
"blocks": 20480, 00:12:55.480 "percent": 31 00:12:55.480 } 00:12:55.480 }, 00:12:55.480 "base_bdevs_list": [ 00:12:55.480 { 00:12:55.480 "name": "spare", 00:12:55.480 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:55.480 "is_configured": true, 00:12:55.480 "data_offset": 0, 00:12:55.480 "data_size": 65536 00:12:55.480 }, 00:12:55.480 { 00:12:55.480 "name": "BaseBdev2", 00:12:55.480 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:55.480 "is_configured": true, 00:12:55.480 "data_offset": 0, 00:12:55.480 "data_size": 65536 00:12:55.480 } 00:12:55.480 ] 00:12:55.480 }' 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.480 09:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.480 [2024-11-20 09:25:20.907724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.741 [2024-11-20 09:25:20.961607] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.741 [2024-11-20 09:25:20.961688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.741 [2024-11-20 09:25:20.961704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.741 [2024-11-20 09:25:20.961714] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.741 09:25:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.741 "name": "raid_bdev1", 00:12:55.741 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:55.741 "strip_size_kb": 0, 00:12:55.741 "state": "online", 00:12:55.741 "raid_level": "raid1", 00:12:55.741 
"superblock": false, 00:12:55.741 "num_base_bdevs": 2, 00:12:55.741 "num_base_bdevs_discovered": 1, 00:12:55.741 "num_base_bdevs_operational": 1, 00:12:55.741 "base_bdevs_list": [ 00:12:55.741 { 00:12:55.741 "name": null, 00:12:55.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.741 "is_configured": false, 00:12:55.741 "data_offset": 0, 00:12:55.741 "data_size": 65536 00:12:55.741 }, 00:12:55.741 { 00:12:55.741 "name": "BaseBdev2", 00:12:55.741 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:55.741 "is_configured": true, 00:12:55.741 "data_offset": 0, 00:12:55.741 "data_size": 65536 00:12:55.741 } 00:12:55.741 ] 00:12:55.741 }' 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.741 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.000 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:56.260 "name": "raid_bdev1", 00:12:56.260 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:56.260 "strip_size_kb": 0, 00:12:56.260 "state": "online", 00:12:56.260 "raid_level": "raid1", 00:12:56.260 "superblock": false, 00:12:56.260 "num_base_bdevs": 2, 00:12:56.260 "num_base_bdevs_discovered": 1, 00:12:56.260 "num_base_bdevs_operational": 1, 00:12:56.260 "base_bdevs_list": [ 00:12:56.260 { 00:12:56.260 "name": null, 00:12:56.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.260 "is_configured": false, 00:12:56.260 "data_offset": 0, 00:12:56.260 "data_size": 65536 00:12:56.260 }, 00:12:56.260 { 00:12:56.260 "name": "BaseBdev2", 00:12:56.260 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:56.260 "is_configured": true, 00:12:56.260 "data_offset": 0, 00:12:56.260 "data_size": 65536 00:12:56.260 } 00:12:56.260 ] 00:12:56.260 }' 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.260 [2024-11-20 09:25:21.559806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.260 [2024-11-20 09:25:21.577563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:56.260 09:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.260 
09:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:56.260 [2024-11-20 09:25:21.579468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.199 "name": "raid_bdev1", 00:12:57.199 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:57.199 "strip_size_kb": 0, 00:12:57.199 "state": "online", 00:12:57.199 "raid_level": "raid1", 00:12:57.199 "superblock": false, 00:12:57.199 "num_base_bdevs": 2, 00:12:57.199 "num_base_bdevs_discovered": 2, 00:12:57.199 "num_base_bdevs_operational": 2, 00:12:57.199 "process": { 00:12:57.199 "type": "rebuild", 00:12:57.199 "target": "spare", 00:12:57.199 "progress": { 00:12:57.199 "blocks": 20480, 00:12:57.199 "percent": 31 00:12:57.199 } 00:12:57.199 }, 00:12:57.199 "base_bdevs_list": [ 
00:12:57.199 { 00:12:57.199 "name": "spare", 00:12:57.199 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:57.199 "is_configured": true, 00:12:57.199 "data_offset": 0, 00:12:57.199 "data_size": 65536 00:12:57.199 }, 00:12:57.199 { 00:12:57.199 "name": "BaseBdev2", 00:12:57.199 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:57.199 "is_configured": true, 00:12:57.199 "data_offset": 0, 00:12:57.199 "data_size": 65536 00:12:57.199 } 00:12:57.199 ] 00:12:57.199 }' 00:12:57.199 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.459 
09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.459 "name": "raid_bdev1", 00:12:57.459 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:57.459 "strip_size_kb": 0, 00:12:57.459 "state": "online", 00:12:57.459 "raid_level": "raid1", 00:12:57.459 "superblock": false, 00:12:57.459 "num_base_bdevs": 2, 00:12:57.459 "num_base_bdevs_discovered": 2, 00:12:57.459 "num_base_bdevs_operational": 2, 00:12:57.459 "process": { 00:12:57.459 "type": "rebuild", 00:12:57.459 "target": "spare", 00:12:57.459 "progress": { 00:12:57.459 "blocks": 22528, 00:12:57.459 "percent": 34 00:12:57.459 } 00:12:57.459 }, 00:12:57.459 "base_bdevs_list": [ 00:12:57.459 { 00:12:57.459 "name": "spare", 00:12:57.459 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:57.459 "is_configured": true, 00:12:57.459 "data_offset": 0, 00:12:57.459 "data_size": 65536 00:12:57.459 }, 00:12:57.459 { 00:12:57.459 "name": "BaseBdev2", 00:12:57.459 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:57.459 "is_configured": true, 00:12:57.459 "data_offset": 0, 00:12:57.459 "data_size": 65536 00:12:57.459 } 00:12:57.459 ] 00:12:57.459 }' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.459 09:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.854 "name": "raid_bdev1", 00:12:58.854 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:58.854 "strip_size_kb": 0, 00:12:58.854 "state": "online", 00:12:58.854 "raid_level": "raid1", 00:12:58.854 "superblock": false, 00:12:58.854 "num_base_bdevs": 2, 00:12:58.854 "num_base_bdevs_discovered": 2, 00:12:58.854 "num_base_bdevs_operational": 2, 00:12:58.854 "process": { 
00:12:58.854 "type": "rebuild", 00:12:58.854 "target": "spare", 00:12:58.854 "progress": { 00:12:58.854 "blocks": 45056, 00:12:58.854 "percent": 68 00:12:58.854 } 00:12:58.854 }, 00:12:58.854 "base_bdevs_list": [ 00:12:58.854 { 00:12:58.854 "name": "spare", 00:12:58.854 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:58.854 "is_configured": true, 00:12:58.854 "data_offset": 0, 00:12:58.854 "data_size": 65536 00:12:58.854 }, 00:12:58.854 { 00:12:58.854 "name": "BaseBdev2", 00:12:58.854 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:58.854 "is_configured": true, 00:12:58.854 "data_offset": 0, 00:12:58.854 "data_size": 65536 00:12:58.854 } 00:12:58.854 ] 00:12:58.854 }' 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.854 09:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.854 09:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.854 09:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.422 [2024-11-20 09:25:24.794895] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:59.422 [2024-11-20 09:25:24.795088] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:59.422 [2024-11-20 09:25:24.795151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.681 "name": "raid_bdev1", 00:12:59.681 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:59.681 "strip_size_kb": 0, 00:12:59.681 "state": "online", 00:12:59.681 "raid_level": "raid1", 00:12:59.681 "superblock": false, 00:12:59.681 "num_base_bdevs": 2, 00:12:59.681 "num_base_bdevs_discovered": 2, 00:12:59.681 "num_base_bdevs_operational": 2, 00:12:59.681 "base_bdevs_list": [ 00:12:59.681 { 00:12:59.681 "name": "spare", 00:12:59.681 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:59.681 "is_configured": true, 00:12:59.681 "data_offset": 0, 00:12:59.681 "data_size": 65536 00:12:59.681 }, 00:12:59.681 { 00:12:59.681 "name": "BaseBdev2", 00:12:59.681 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:59.681 "is_configured": true, 00:12:59.681 "data_offset": 0, 00:12:59.681 "data_size": 65536 00:12:59.681 } 00:12:59.681 ] 00:12:59.681 }' 00:12:59.681 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:59.939 09:25:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.939 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.939 "name": "raid_bdev1", 00:12:59.939 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:59.939 "strip_size_kb": 0, 00:12:59.939 "state": "online", 00:12:59.939 "raid_level": "raid1", 00:12:59.939 "superblock": false, 00:12:59.939 "num_base_bdevs": 2, 00:12:59.939 "num_base_bdevs_discovered": 2, 00:12:59.939 "num_base_bdevs_operational": 2, 00:12:59.939 "base_bdevs_list": [ 00:12:59.939 { 00:12:59.939 "name": "spare", 00:12:59.939 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:59.939 "is_configured": true, 
00:12:59.939 "data_offset": 0, 00:12:59.939 "data_size": 65536 00:12:59.939 }, 00:12:59.939 { 00:12:59.939 "name": "BaseBdev2", 00:12:59.939 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:59.939 "is_configured": true, 00:12:59.939 "data_offset": 0, 00:12:59.939 "data_size": 65536 00:12:59.939 } 00:12:59.939 ] 00:12:59.939 }' 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.940 "name": "raid_bdev1", 00:12:59.940 "uuid": "8633aba5-7c82-4673-98a4-f2218e70cc79", 00:12:59.940 "strip_size_kb": 0, 00:12:59.940 "state": "online", 00:12:59.940 "raid_level": "raid1", 00:12:59.940 "superblock": false, 00:12:59.940 "num_base_bdevs": 2, 00:12:59.940 "num_base_bdevs_discovered": 2, 00:12:59.940 "num_base_bdevs_operational": 2, 00:12:59.940 "base_bdevs_list": [ 00:12:59.940 { 00:12:59.940 "name": "spare", 00:12:59.940 "uuid": "a70e2353-06b7-5cd2-9abb-be4764ec51db", 00:12:59.940 "is_configured": true, 00:12:59.940 "data_offset": 0, 00:12:59.940 "data_size": 65536 00:12:59.940 }, 00:12:59.940 { 00:12:59.940 "name": "BaseBdev2", 00:12:59.940 "uuid": "a4c1e6f3-8618-566c-b5ac-e2ab2e01388c", 00:12:59.940 "is_configured": true, 00:12:59.940 "data_offset": 0, 00:12:59.940 "data_size": 65536 00:12:59.940 } 00:12:59.940 ] 00:12:59.940 }' 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.940 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.507 [2024-11-20 09:25:25.814053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.507 [2024-11-20 09:25:25.814159] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.507 [2024-11-20 09:25:25.814297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.507 [2024-11-20 09:25:25.814409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.507 [2024-11-20 09:25:25.814489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:00.507 09:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.508 09:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:00.767 /dev/nbd0 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.767 1+0 records in 00:13:00.767 1+0 records out 00:13:00.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413719 s, 9.9 MB/s 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.767 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:01.027 /dev/nbd1 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.027 1+0 records in 00:13:01.027 1+0 records out 00:13:01.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277364 s, 14.8 MB/s 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.027 09:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:01.028 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.028 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.028 09:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.287 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.546 09:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:01.805 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75702 00:13:01.806 09:25:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75702 ']' 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75702 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75702 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.806 killing process with pid 75702 00:13:01.806 Received shutdown signal, test time was about 60.000000 seconds 00:13:01.806 00:13:01.806 Latency(us) 00:13:01.806 [2024-11-20T09:25:27.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.806 [2024-11-20T09:25:27.262Z] =================================================================================================================== 00:13:01.806 [2024-11-20T09:25:27.262Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75702' 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75702 00:13:01.806 [2024-11-20 09:25:27.066516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.806 09:25:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75702 00:13:02.065 [2024-11-20 09:25:27.395885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:03.446 00:13:03.446 real 0m15.674s 00:13:03.446 user 0m17.961s 00:13:03.446 sys 0m3.065s 00:13:03.446 
************************************ 00:13:03.446 END TEST raid_rebuild_test 00:13:03.446 ************************************ 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.446 09:25:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:03.446 09:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:03.446 09:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.446 09:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.446 ************************************ 00:13:03.446 START TEST raid_rebuild_test_sb 00:13:03.446 ************************************ 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76120 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76120 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76120 ']' 00:13:03.446 09:25:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.446 09:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.446 Zero copy mechanism will not be used. 00:13:03.446 [2024-11-20 09:25:28.791070] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:03.446 [2024-11-20 09:25:28.791208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76120 ] 00:13:03.706 [2024-11-20 09:25:28.953967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.706 [2024-11-20 09:25:29.078612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.965 [2024-11-20 09:25:29.321494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.966 [2024-11-20 09:25:29.321647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 BaseBdev1_malloc 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 [2024-11-20 09:25:29.785867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:04.536 [2024-11-20 09:25:29.786032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.536 [2024-11-20 09:25:29.786099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.536 [2024-11-20 09:25:29.786141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.536 [2024-11-20 09:25:29.788666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.536 [2024-11-20 09:25:29.788757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.536 BaseBdev1 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.536 09:25:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 BaseBdev2_malloc 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 [2024-11-20 09:25:29.847583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:04.536 [2024-11-20 09:25:29.847660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.536 [2024-11-20 09:25:29.847685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:04.536 [2024-11-20 09:25:29.847701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.536 [2024-11-20 09:25:29.850179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.536 [2024-11-20 09:25:29.850223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.536 BaseBdev2 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 spare_malloc 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 spare_delay 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 [2024-11-20 09:25:29.936169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.536 [2024-11-20 09:25:29.936294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.536 [2024-11-20 09:25:29.936350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:04.536 [2024-11-20 09:25:29.936395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.536 [2024-11-20 09:25:29.938950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.536 [2024-11-20 09:25:29.939042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.536 spare 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 [2024-11-20 09:25:29.948234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.536 [2024-11-20 09:25:29.950355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.536 [2024-11-20 09:25:29.950588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:04.536 [2024-11-20 09:25:29.950609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.536 [2024-11-20 09:25:29.950922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:04.536 [2024-11-20 09:25:29.951133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:04.536 [2024-11-20 09:25:29.951154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:04.536 [2024-11-20 09:25:29.951343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 09:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.796 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.796 "name": "raid_bdev1", 00:13:04.796 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:04.796 "strip_size_kb": 0, 00:13:04.796 "state": "online", 00:13:04.796 "raid_level": "raid1", 00:13:04.796 "superblock": true, 00:13:04.796 "num_base_bdevs": 2, 00:13:04.796 "num_base_bdevs_discovered": 2, 00:13:04.796 "num_base_bdevs_operational": 2, 00:13:04.796 "base_bdevs_list": [ 00:13:04.796 { 00:13:04.796 "name": "BaseBdev1", 00:13:04.796 "uuid": "3cca3be9-eb5c-5008-84a2-ffd30926151b", 00:13:04.796 "is_configured": true, 00:13:04.796 "data_offset": 2048, 00:13:04.796 "data_size": 63488 00:13:04.796 }, 00:13:04.796 { 00:13:04.796 "name": "BaseBdev2", 00:13:04.796 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:04.796 "is_configured": true, 00:13:04.796 "data_offset": 2048, 00:13:04.796 "data_size": 63488 00:13:04.796 } 00:13:04.796 ] 00:13:04.796 }' 00:13:04.796 09:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.796 09:25:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.055 [2024-11-20 09:25:30.435814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:05.055 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.315 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:05.315 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:05.315 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:05.315 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:05.315 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:05.315 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.316 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:05.316 [2024-11-20 09:25:30.747019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:05.316 /dev/nbd0 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.575 1+0 records in 00:13:05.575 1+0 records out 00:13:05.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414366 s, 9.9 MB/s 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:05.575 09:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:10.880 63488+0 records in 00:13:10.880 63488+0 records out 00:13:10.880 32505856 bytes (33 MB, 31 MiB) copied, 4.5999 s, 7.1 MB/s 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.880 09:25:35 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.880 [2024-11-20 09:25:35.652285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.880 [2024-11-20 09:25:35.692331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.880 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.880 "name": "raid_bdev1", 00:13:10.880 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:10.880 "strip_size_kb": 0, 00:13:10.880 "state": "online", 00:13:10.880 "raid_level": "raid1", 00:13:10.880 "superblock": true, 
00:13:10.880 "num_base_bdevs": 2, 00:13:10.880 "num_base_bdevs_discovered": 1, 00:13:10.880 "num_base_bdevs_operational": 1, 00:13:10.880 "base_bdevs_list": [ 00:13:10.880 { 00:13:10.880 "name": null, 00:13:10.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.880 "is_configured": false, 00:13:10.881 "data_offset": 0, 00:13:10.881 "data_size": 63488 00:13:10.881 }, 00:13:10.881 { 00:13:10.881 "name": "BaseBdev2", 00:13:10.881 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:10.881 "is_configured": true, 00:13:10.881 "data_offset": 2048, 00:13:10.881 "data_size": 63488 00:13:10.881 } 00:13:10.881 ] 00:13:10.881 }' 00:13:10.881 09:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.881 09:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.881 09:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.881 09:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.881 09:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.881 [2024-11-20 09:25:36.199583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.881 [2024-11-20 09:25:36.220442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:10.881 09:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.881 09:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:10.881 [2024-11-20 09:25:36.222648] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.819 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.079 "name": "raid_bdev1", 00:13:12.079 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:12.079 "strip_size_kb": 0, 00:13:12.079 "state": "online", 00:13:12.079 "raid_level": "raid1", 00:13:12.079 "superblock": true, 00:13:12.079 "num_base_bdevs": 2, 00:13:12.079 "num_base_bdevs_discovered": 2, 00:13:12.079 "num_base_bdevs_operational": 2, 00:13:12.079 "process": { 00:13:12.079 "type": "rebuild", 00:13:12.079 "target": "spare", 00:13:12.079 "progress": { 00:13:12.079 "blocks": 20480, 00:13:12.079 "percent": 32 00:13:12.079 } 00:13:12.079 }, 00:13:12.079 "base_bdevs_list": [ 00:13:12.079 { 00:13:12.079 "name": "spare", 00:13:12.079 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:12.079 "is_configured": true, 00:13:12.079 "data_offset": 2048, 00:13:12.079 "data_size": 63488 00:13:12.079 }, 00:13:12.079 { 00:13:12.079 "name": "BaseBdev2", 00:13:12.079 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:12.079 "is_configured": true, 00:13:12.079 "data_offset": 2048, 00:13:12.079 "data_size": 63488 
00:13:12.079 } 00:13:12.079 ] 00:13:12.079 }' 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.079 [2024-11-20 09:25:37.345784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.079 [2024-11-20 09:25:37.428823] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:12.079 [2024-11-20 09:25:37.428991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.079 [2024-11-20 09:25:37.429058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.079 [2024-11-20 09:25:37.429108] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.079 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.080 "name": "raid_bdev1", 00:13:12.080 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:12.080 "strip_size_kb": 0, 00:13:12.080 "state": "online", 00:13:12.080 "raid_level": "raid1", 00:13:12.080 "superblock": true, 00:13:12.080 "num_base_bdevs": 2, 00:13:12.080 "num_base_bdevs_discovered": 1, 00:13:12.080 "num_base_bdevs_operational": 1, 00:13:12.080 "base_bdevs_list": [ 00:13:12.080 { 00:13:12.080 "name": null, 00:13:12.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.080 "is_configured": false, 00:13:12.080 "data_offset": 0, 00:13:12.080 "data_size": 63488 00:13:12.080 }, 00:13:12.080 { 00:13:12.080 "name": "BaseBdev2", 00:13:12.080 "uuid": 
"5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:12.080 "is_configured": true, 00:13:12.080 "data_offset": 2048, 00:13:12.080 "data_size": 63488 00:13:12.080 } 00:13:12.080 ] 00:13:12.080 }' 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.080 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.650 "name": "raid_bdev1", 00:13:12.650 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:12.650 "strip_size_kb": 0, 00:13:12.650 "state": "online", 00:13:12.650 "raid_level": "raid1", 00:13:12.650 "superblock": true, 00:13:12.650 "num_base_bdevs": 2, 00:13:12.650 "num_base_bdevs_discovered": 1, 00:13:12.650 "num_base_bdevs_operational": 1, 00:13:12.650 "base_bdevs_list": [ 00:13:12.650 { 
00:13:12.650 "name": null, 00:13:12.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.650 "is_configured": false, 00:13:12.650 "data_offset": 0, 00:13:12.650 "data_size": 63488 00:13:12.650 }, 00:13:12.650 { 00:13:12.650 "name": "BaseBdev2", 00:13:12.650 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:12.650 "is_configured": true, 00:13:12.650 "data_offset": 2048, 00:13:12.650 "data_size": 63488 00:13:12.650 } 00:13:12.650 ] 00:13:12.650 }' 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.650 09:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.650 09:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.650 09:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.650 09:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.650 09:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.650 [2024-11-20 09:25:38.033297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.650 [2024-11-20 09:25:38.053118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:12.650 09:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.650 09:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:12.650 [2024-11-20 09:25:38.055419] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.031 09:25:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.031 "name": "raid_bdev1", 00:13:14.031 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:14.031 "strip_size_kb": 0, 00:13:14.031 "state": "online", 00:13:14.031 "raid_level": "raid1", 00:13:14.031 "superblock": true, 00:13:14.031 "num_base_bdevs": 2, 00:13:14.031 "num_base_bdevs_discovered": 2, 00:13:14.031 "num_base_bdevs_operational": 2, 00:13:14.031 "process": { 00:13:14.031 "type": "rebuild", 00:13:14.031 "target": "spare", 00:13:14.031 "progress": { 00:13:14.031 "blocks": 20480, 00:13:14.031 "percent": 32 00:13:14.031 } 00:13:14.031 }, 00:13:14.031 "base_bdevs_list": [ 00:13:14.031 { 00:13:14.031 "name": "spare", 00:13:14.031 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:14.031 "is_configured": true, 00:13:14.031 "data_offset": 2048, 00:13:14.031 "data_size": 63488 00:13:14.031 }, 00:13:14.031 { 00:13:14.031 "name": "BaseBdev2", 00:13:14.031 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:14.031 
"is_configured": true, 00:13:14.031 "data_offset": 2048, 00:13:14.031 "data_size": 63488 00:13:14.031 } 00:13:14.031 ] 00:13:14.031 }' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:14.031 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.031 "name": "raid_bdev1", 00:13:14.031 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:14.031 "strip_size_kb": 0, 00:13:14.031 "state": "online", 00:13:14.031 "raid_level": "raid1", 00:13:14.031 "superblock": true, 00:13:14.031 "num_base_bdevs": 2, 00:13:14.031 "num_base_bdevs_discovered": 2, 00:13:14.031 "num_base_bdevs_operational": 2, 00:13:14.031 "process": { 00:13:14.031 "type": "rebuild", 00:13:14.031 "target": "spare", 00:13:14.031 "progress": { 00:13:14.031 "blocks": 22528, 00:13:14.031 "percent": 35 00:13:14.031 } 00:13:14.031 }, 00:13:14.031 "base_bdevs_list": [ 00:13:14.031 { 00:13:14.031 "name": "spare", 00:13:14.031 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:14.031 "is_configured": true, 00:13:14.031 "data_offset": 2048, 00:13:14.031 "data_size": 63488 00:13:14.031 }, 00:13:14.031 { 00:13:14.031 "name": "BaseBdev2", 00:13:14.031 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:14.031 "is_configured": true, 00:13:14.031 "data_offset": 2048, 00:13:14.031 "data_size": 63488 00:13:14.031 } 00:13:14.031 ] 00:13:14.031 }' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.031 09:25:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.031 09:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.971 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.971 "name": "raid_bdev1", 00:13:14.971 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:14.971 "strip_size_kb": 0, 00:13:14.971 "state": "online", 00:13:14.971 "raid_level": "raid1", 00:13:14.972 "superblock": true, 00:13:14.972 "num_base_bdevs": 2, 00:13:14.972 "num_base_bdevs_discovered": 2, 00:13:14.972 "num_base_bdevs_operational": 2, 00:13:14.972 "process": { 
00:13:14.972 "type": "rebuild", 00:13:14.972 "target": "spare", 00:13:14.972 "progress": { 00:13:14.972 "blocks": 45056, 00:13:14.972 "percent": 70 00:13:14.972 } 00:13:14.972 }, 00:13:14.972 "base_bdevs_list": [ 00:13:14.972 { 00:13:14.972 "name": "spare", 00:13:14.972 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:14.972 "is_configured": true, 00:13:14.972 "data_offset": 2048, 00:13:14.972 "data_size": 63488 00:13:14.972 }, 00:13:14.972 { 00:13:14.972 "name": "BaseBdev2", 00:13:14.972 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:14.972 "is_configured": true, 00:13:14.972 "data_offset": 2048, 00:13:14.972 "data_size": 63488 00:13:14.972 } 00:13:14.972 ] 00:13:14.972 }' 00:13:14.972 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.972 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.972 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.231 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.231 09:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.800 [2024-11-20 09:25:41.171143] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:15.800 [2024-11-20 09:25:41.171375] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.800 [2024-11-20 09:25:41.171573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.059 
09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.059 "name": "raid_bdev1", 00:13:16.059 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:16.059 "strip_size_kb": 0, 00:13:16.059 "state": "online", 00:13:16.059 "raid_level": "raid1", 00:13:16.059 "superblock": true, 00:13:16.059 "num_base_bdevs": 2, 00:13:16.059 "num_base_bdevs_discovered": 2, 00:13:16.059 "num_base_bdevs_operational": 2, 00:13:16.059 "base_bdevs_list": [ 00:13:16.059 { 00:13:16.059 "name": "spare", 00:13:16.059 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:16.059 "is_configured": true, 00:13:16.059 "data_offset": 2048, 00:13:16.059 "data_size": 63488 00:13:16.059 }, 00:13:16.059 { 00:13:16.059 "name": "BaseBdev2", 00:13:16.059 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:16.059 "is_configured": true, 00:13:16.059 "data_offset": 2048, 00:13:16.059 "data_size": 63488 00:13:16.059 } 00:13:16.059 ] 00:13:16.059 }' 00:13:16.059 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.319 "name": "raid_bdev1", 00:13:16.319 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:16.319 "strip_size_kb": 0, 00:13:16.319 "state": "online", 00:13:16.319 "raid_level": "raid1", 00:13:16.319 "superblock": true, 00:13:16.319 "num_base_bdevs": 2, 00:13:16.319 "num_base_bdevs_discovered": 2, 00:13:16.319 "num_base_bdevs_operational": 2, 00:13:16.319 "base_bdevs_list": [ 00:13:16.319 { 00:13:16.319 
"name": "spare", 00:13:16.319 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:16.319 "is_configured": true, 00:13:16.319 "data_offset": 2048, 00:13:16.319 "data_size": 63488 00:13:16.319 }, 00:13:16.319 { 00:13:16.319 "name": "BaseBdev2", 00:13:16.319 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:16.319 "is_configured": true, 00:13:16.319 "data_offset": 2048, 00:13:16.319 "data_size": 63488 00:13:16.319 } 00:13:16.319 ] 00:13:16.319 }' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.319 "name": "raid_bdev1", 00:13:16.319 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:16.319 "strip_size_kb": 0, 00:13:16.319 "state": "online", 00:13:16.319 "raid_level": "raid1", 00:13:16.319 "superblock": true, 00:13:16.319 "num_base_bdevs": 2, 00:13:16.319 "num_base_bdevs_discovered": 2, 00:13:16.319 "num_base_bdevs_operational": 2, 00:13:16.319 "base_bdevs_list": [ 00:13:16.319 { 00:13:16.319 "name": "spare", 00:13:16.319 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:16.319 "is_configured": true, 00:13:16.319 "data_offset": 2048, 00:13:16.319 "data_size": 63488 00:13:16.319 }, 00:13:16.319 { 00:13:16.319 "name": "BaseBdev2", 00:13:16.319 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:16.319 "is_configured": true, 00:13:16.319 "data_offset": 2048, 00:13:16.319 "data_size": 63488 00:13:16.319 } 00:13:16.319 ] 00:13:16.319 }' 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.319 09:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.887 [2024-11-20 09:25:42.117915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.887 [2024-11-20 09:25:42.117955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.887 [2024-11-20 09:25:42.118055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.887 [2024-11-20 09:25:42.118138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.887 [2024-11-20 09:25:42.118151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.887 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:17.146 /dev/nbd0 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.146 1+0 records in 00:13:17.146 1+0 records out 00:13:17.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338904 s, 12.1 MB/s 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.146 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:17.405 /dev/nbd1 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:17.405 09:25:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.405 1+0 records in 00:13:17.405 1+0 records out 00:13:17.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426588 s, 9.6 MB/s 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.405 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:17.664 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:17.664 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.664 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:17.664 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.664 
09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:17.664 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.664 09:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.923 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 [2024-11-20 09:25:43.488760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:18.181 [2024-11-20 09:25:43.488839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.181 [2024-11-20 09:25:43.488877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:18.181 [2024-11-20 09:25:43.488889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.181 [2024-11-20 09:25:43.491488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.181 [2024-11-20 09:25:43.491552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:18.181 [2024-11-20 09:25:43.491676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:18.181 [2024-11-20 
09:25:43.491744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.181 [2024-11-20 09:25:43.491924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.181 spare 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 [2024-11-20 09:25:43.591859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:18.181 [2024-11-20 09:25:43.592028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.181 [2024-11-20 09:25:43.592498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:18.181 [2024-11-20 09:25:43.592755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:18.181 [2024-11-20 09:25:43.592766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:18.181 [2024-11-20 09:25:43.593012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.439 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.439 "name": "raid_bdev1", 00:13:18.439 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:18.439 "strip_size_kb": 0, 00:13:18.439 "state": "online", 00:13:18.439 "raid_level": "raid1", 00:13:18.439 "superblock": true, 00:13:18.439 "num_base_bdevs": 2, 00:13:18.439 "num_base_bdevs_discovered": 2, 00:13:18.439 "num_base_bdevs_operational": 2, 00:13:18.439 "base_bdevs_list": [ 00:13:18.439 { 00:13:18.439 "name": "spare", 00:13:18.439 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:18.439 "is_configured": true, 00:13:18.439 "data_offset": 2048, 00:13:18.439 "data_size": 63488 00:13:18.439 }, 00:13:18.439 { 00:13:18.439 "name": "BaseBdev2", 00:13:18.439 "uuid": 
"5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:18.439 "is_configured": true, 00:13:18.439 "data_offset": 2048, 00:13:18.439 "data_size": 63488 00:13:18.439 } 00:13:18.439 ] 00:13:18.439 }' 00:13:18.440 09:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.440 09:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.711 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.711 "name": "raid_bdev1", 00:13:18.711 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:18.711 "strip_size_kb": 0, 00:13:18.711 "state": "online", 00:13:18.711 "raid_level": "raid1", 00:13:18.711 "superblock": true, 00:13:18.711 "num_base_bdevs": 2, 00:13:18.711 "num_base_bdevs_discovered": 2, 00:13:18.711 "num_base_bdevs_operational": 2, 00:13:18.711 "base_bdevs_list": [ 00:13:18.711 { 
00:13:18.711 "name": "spare", 00:13:18.711 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:18.711 "is_configured": true, 00:13:18.712 "data_offset": 2048, 00:13:18.712 "data_size": 63488 00:13:18.712 }, 00:13:18.712 { 00:13:18.712 "name": "BaseBdev2", 00:13:18.712 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:18.712 "is_configured": true, 00:13:18.712 "data_offset": 2048, 00:13:18.712 "data_size": 63488 00:13:18.712 } 00:13:18.712 ] 00:13:18.712 }' 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:18.712 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.972 [2024-11-20 09:25:44.203979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.972 "name": "raid_bdev1", 00:13:18.972 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:18.972 "strip_size_kb": 0, 00:13:18.972 
"state": "online", 00:13:18.972 "raid_level": "raid1", 00:13:18.972 "superblock": true, 00:13:18.972 "num_base_bdevs": 2, 00:13:18.972 "num_base_bdevs_discovered": 1, 00:13:18.972 "num_base_bdevs_operational": 1, 00:13:18.972 "base_bdevs_list": [ 00:13:18.972 { 00:13:18.972 "name": null, 00:13:18.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.972 "is_configured": false, 00:13:18.972 "data_offset": 0, 00:13:18.972 "data_size": 63488 00:13:18.972 }, 00:13:18.972 { 00:13:18.972 "name": "BaseBdev2", 00:13:18.972 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:18.972 "is_configured": true, 00:13:18.972 "data_offset": 2048, 00:13:18.972 "data_size": 63488 00:13:18.972 } 00:13:18.972 ] 00:13:18.972 }' 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.972 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.541 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.541 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.541 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.541 [2024-11-20 09:25:44.695254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.541 [2024-11-20 09:25:44.695633] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:19.541 [2024-11-20 09:25:44.695716] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:19.541 [2024-11-20 09:25:44.695793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.541 [2024-11-20 09:25:44.714700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:19.541 09:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.541 09:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:19.541 [2024-11-20 09:25:44.716949] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.477 "name": "raid_bdev1", 00:13:20.477 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:20.477 "strip_size_kb": 0, 00:13:20.477 "state": "online", 00:13:20.477 "raid_level": "raid1", 
00:13:20.477 "superblock": true, 00:13:20.477 "num_base_bdevs": 2, 00:13:20.477 "num_base_bdevs_discovered": 2, 00:13:20.477 "num_base_bdevs_operational": 2, 00:13:20.477 "process": { 00:13:20.477 "type": "rebuild", 00:13:20.477 "target": "spare", 00:13:20.477 "progress": { 00:13:20.477 "blocks": 20480, 00:13:20.477 "percent": 32 00:13:20.477 } 00:13:20.477 }, 00:13:20.477 "base_bdevs_list": [ 00:13:20.477 { 00:13:20.477 "name": "spare", 00:13:20.477 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:20.477 "is_configured": true, 00:13:20.477 "data_offset": 2048, 00:13:20.477 "data_size": 63488 00:13:20.477 }, 00:13:20.477 { 00:13:20.477 "name": "BaseBdev2", 00:13:20.477 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:20.477 "is_configured": true, 00:13:20.477 "data_offset": 2048, 00:13:20.477 "data_size": 63488 00:13:20.477 } 00:13:20.477 ] 00:13:20.477 }' 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.477 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.477 [2024-11-20 09:25:45.868611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.477 [2024-11-20 09:25:45.922742] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.477 [2024-11-20 09:25:45.922829] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:20.477 [2024-11-20 09:25:45.922844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.477 [2024-11-20 09:25:45.922853] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.737 09:25:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.737 09:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.737 "name": "raid_bdev1", 00:13:20.737 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:20.737 "strip_size_kb": 0, 00:13:20.737 "state": "online", 00:13:20.737 "raid_level": "raid1", 00:13:20.737 "superblock": true, 00:13:20.737 "num_base_bdevs": 2, 00:13:20.737 "num_base_bdevs_discovered": 1, 00:13:20.737 "num_base_bdevs_operational": 1, 00:13:20.737 "base_bdevs_list": [ 00:13:20.737 { 00:13:20.737 "name": null, 00:13:20.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.737 "is_configured": false, 00:13:20.737 "data_offset": 0, 00:13:20.737 "data_size": 63488 00:13:20.737 }, 00:13:20.737 { 00:13:20.737 "name": "BaseBdev2", 00:13:20.737 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:20.737 "is_configured": true, 00:13:20.737 "data_offset": 2048, 00:13:20.737 "data_size": 63488 00:13:20.737 } 00:13:20.737 ] 00:13:20.737 }' 00:13:20.737 09:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.737 09:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.997 09:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.997 09:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.997 09:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.997 [2024-11-20 09:25:46.395757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.997 [2024-11-20 09:25:46.395919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.997 [2024-11-20 09:25:46.395980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:20.997 [2024-11-20 09:25:46.396025] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.997 [2024-11-20 09:25:46.396609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.997 [2024-11-20 09:25:46.396679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.997 [2024-11-20 09:25:46.396826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:20.997 [2024-11-20 09:25:46.396877] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.997 [2024-11-20 09:25:46.396928] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:20.997 [2024-11-20 09:25:46.396980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.997 [2024-11-20 09:25:46.415838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:20.997 spare 00:13:20.997 09:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.997 09:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:20.997 [2024-11-20 09:25:46.418160] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.377 "name": "raid_bdev1", 00:13:22.377 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:22.377 "strip_size_kb": 0, 00:13:22.377 "state": "online", 00:13:22.377 "raid_level": "raid1", 00:13:22.377 "superblock": true, 00:13:22.377 "num_base_bdevs": 2, 00:13:22.377 "num_base_bdevs_discovered": 2, 00:13:22.377 "num_base_bdevs_operational": 2, 00:13:22.377 "process": { 00:13:22.377 "type": "rebuild", 00:13:22.377 "target": "spare", 00:13:22.377 "progress": { 00:13:22.377 "blocks": 20480, 00:13:22.377 "percent": 32 00:13:22.377 } 00:13:22.377 }, 00:13:22.377 "base_bdevs_list": [ 00:13:22.377 { 00:13:22.377 "name": "spare", 00:13:22.377 "uuid": "c2b96746-ed67-5276-a9f0-fb92304409b0", 00:13:22.377 "is_configured": true, 00:13:22.377 "data_offset": 2048, 00:13:22.377 "data_size": 63488 00:13:22.377 }, 00:13:22.377 { 00:13:22.377 "name": "BaseBdev2", 00:13:22.377 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:22.377 "is_configured": true, 00:13:22.377 "data_offset": 2048, 00:13:22.377 "data_size": 63488 00:13:22.377 } 00:13:22.377 ] 00:13:22.377 }' 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.377 
09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.377 [2024-11-20 09:25:47.589235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.377 [2024-11-20 09:25:47.624197] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.377 [2024-11-20 09:25:47.624399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.377 [2024-11-20 09:25:47.624427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.377 [2024-11-20 09:25:47.624456] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.377 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.378 "name": "raid_bdev1", 00:13:22.378 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:22.378 "strip_size_kb": 0, 00:13:22.378 "state": "online", 00:13:22.378 "raid_level": "raid1", 00:13:22.378 "superblock": true, 00:13:22.378 "num_base_bdevs": 2, 00:13:22.378 "num_base_bdevs_discovered": 1, 00:13:22.378 "num_base_bdevs_operational": 1, 00:13:22.378 "base_bdevs_list": [ 00:13:22.378 { 00:13:22.378 "name": null, 00:13:22.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.378 "is_configured": false, 00:13:22.378 "data_offset": 0, 00:13:22.378 "data_size": 63488 00:13:22.378 }, 00:13:22.378 { 00:13:22.378 "name": "BaseBdev2", 00:13:22.378 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:22.378 "is_configured": true, 00:13:22.378 "data_offset": 2048, 00:13:22.378 "data_size": 63488 00:13:22.378 } 00:13:22.378 ] 00:13:22.378 }' 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.378 09:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.948 09:25:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.948 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.948 "name": "raid_bdev1", 00:13:22.948 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:22.948 "strip_size_kb": 0, 00:13:22.948 "state": "online", 00:13:22.948 "raid_level": "raid1", 00:13:22.948 "superblock": true, 00:13:22.948 "num_base_bdevs": 2, 00:13:22.948 "num_base_bdevs_discovered": 1, 00:13:22.948 "num_base_bdevs_operational": 1, 00:13:22.948 "base_bdevs_list": [ 00:13:22.948 { 00:13:22.948 "name": null, 00:13:22.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.948 "is_configured": false, 00:13:22.948 "data_offset": 0, 00:13:22.948 "data_size": 63488 00:13:22.948 }, 00:13:22.948 { 00:13:22.948 "name": "BaseBdev2", 00:13:22.949 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:22.949 "is_configured": true, 00:13:22.949 "data_offset": 2048, 00:13:22.949 "data_size": 
63488 00:13:22.949 } 00:13:22.949 ] 00:13:22.949 }' 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.949 [2024-11-20 09:25:48.277824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.949 [2024-11-20 09:25:48.277950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.949 [2024-11-20 09:25:48.277981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:22.949 [2024-11-20 09:25:48.278000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.949 [2024-11-20 09:25:48.278462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.949 [2024-11-20 09:25:48.278479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:22.949 [2024-11-20 09:25:48.278565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:22.949 [2024-11-20 09:25:48.278580] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.949 [2024-11-20 09:25:48.278590] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:22.949 [2024-11-20 09:25:48.278600] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:22.949 BaseBdev1 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.949 09:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.911 "name": "raid_bdev1", 00:13:23.911 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:23.911 "strip_size_kb": 0, 00:13:23.911 "state": "online", 00:13:23.911 "raid_level": "raid1", 00:13:23.911 "superblock": true, 00:13:23.911 "num_base_bdevs": 2, 00:13:23.911 "num_base_bdevs_discovered": 1, 00:13:23.911 "num_base_bdevs_operational": 1, 00:13:23.911 "base_bdevs_list": [ 00:13:23.911 { 00:13:23.911 "name": null, 00:13:23.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.911 "is_configured": false, 00:13:23.911 "data_offset": 0, 00:13:23.911 "data_size": 63488 00:13:23.911 }, 00:13:23.911 { 00:13:23.911 "name": "BaseBdev2", 00:13:23.911 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:23.911 "is_configured": true, 00:13:23.911 "data_offset": 2048, 00:13:23.911 "data_size": 63488 00:13:23.911 } 00:13:23.911 ] 00:13:23.911 }' 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.911 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.480 "name": "raid_bdev1", 00:13:24.480 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:24.480 "strip_size_kb": 0, 00:13:24.480 "state": "online", 00:13:24.480 "raid_level": "raid1", 00:13:24.480 "superblock": true, 00:13:24.480 "num_base_bdevs": 2, 00:13:24.480 "num_base_bdevs_discovered": 1, 00:13:24.480 "num_base_bdevs_operational": 1, 00:13:24.480 "base_bdevs_list": [ 00:13:24.480 { 00:13:24.480 "name": null, 00:13:24.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.480 "is_configured": false, 00:13:24.480 "data_offset": 0, 00:13:24.480 "data_size": 63488 00:13:24.480 }, 00:13:24.480 { 00:13:24.480 "name": "BaseBdev2", 00:13:24.480 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:24.480 "is_configured": true, 00:13:24.480 "data_offset": 2048, 00:13:24.480 "data_size": 63488 00:13:24.480 } 00:13:24.480 ] 00:13:24.480 }' 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.480 09:25:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.480 [2024-11-20 09:25:49.863199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.480 [2024-11-20 09:25:49.863443] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:24.480 [2024-11-20 09:25:49.863512] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:24.480 request: 00:13:24.480 { 00:13:24.480 "base_bdev": "BaseBdev1", 00:13:24.480 "raid_bdev": "raid_bdev1", 00:13:24.480 "method": 
"bdev_raid_add_base_bdev", 00:13:24.480 "req_id": 1 00:13:24.480 } 00:13:24.480 Got JSON-RPC error response 00:13:24.480 response: 00:13:24.480 { 00:13:24.480 "code": -22, 00:13:24.480 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:24.480 } 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.480 09:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.860 09:25:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.860 "name": "raid_bdev1", 00:13:25.860 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:25.860 "strip_size_kb": 0, 00:13:25.860 "state": "online", 00:13:25.860 "raid_level": "raid1", 00:13:25.860 "superblock": true, 00:13:25.860 "num_base_bdevs": 2, 00:13:25.860 "num_base_bdevs_discovered": 1, 00:13:25.860 "num_base_bdevs_operational": 1, 00:13:25.860 "base_bdevs_list": [ 00:13:25.860 { 00:13:25.860 "name": null, 00:13:25.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.860 "is_configured": false, 00:13:25.860 "data_offset": 0, 00:13:25.860 "data_size": 63488 00:13:25.860 }, 00:13:25.860 { 00:13:25.860 "name": "BaseBdev2", 00:13:25.860 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:25.860 "is_configured": true, 00:13:25.860 "data_offset": 2048, 00:13:25.860 "data_size": 63488 00:13:25.860 } 00:13:25.860 ] 00:13:25.860 }' 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.860 09:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.860 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.120 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.121 "name": "raid_bdev1", 00:13:26.121 "uuid": "4e1c4390-c4cc-493a-af6c-1b2edcbaf432", 00:13:26.121 "strip_size_kb": 0, 00:13:26.121 "state": "online", 00:13:26.121 "raid_level": "raid1", 00:13:26.121 "superblock": true, 00:13:26.121 "num_base_bdevs": 2, 00:13:26.121 "num_base_bdevs_discovered": 1, 00:13:26.121 "num_base_bdevs_operational": 1, 00:13:26.121 "base_bdevs_list": [ 00:13:26.121 { 00:13:26.121 "name": null, 00:13:26.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.121 "is_configured": false, 00:13:26.121 "data_offset": 0, 00:13:26.121 "data_size": 63488 00:13:26.121 }, 00:13:26.121 { 00:13:26.121 "name": "BaseBdev2", 00:13:26.121 "uuid": "5eadf82f-a3b4-5cca-80e6-fc3a7f933897", 00:13:26.121 "is_configured": true, 00:13:26.121 "data_offset": 2048, 00:13:26.121 "data_size": 63488 00:13:26.121 } 00:13:26.121 ] 00:13:26.121 }' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76120 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76120 ']' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76120 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76120 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.121 killing process with pid 76120 00:13:26.121 Received shutdown signal, test time was about 60.000000 seconds 00:13:26.121 00:13:26.121 Latency(us) 00:13:26.121 [2024-11-20T09:25:51.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.121 [2024-11-20T09:25:51.577Z] =================================================================================================================== 00:13:26.121 [2024-11-20T09:25:51.577Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76120' 00:13:26.121 09:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76120 00:13:26.121 [2024-11-20 09:25:51.475667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.121 09:25:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76120 00:13:26.121 [2024-11-20 09:25:51.475814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.121 [2024-11-20 09:25:51.475874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.121 [2024-11-20 09:25:51.475887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:26.380 [2024-11-20 09:25:51.785381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.760 09:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:27.760 00:13:27.760 real 0m24.249s 00:13:27.760 user 0m29.369s 00:13:27.760 sys 0m3.887s 00:13:27.760 ************************************ 00:13:27.760 END TEST raid_rebuild_test_sb 00:13:27.760 ************************************ 00:13:27.760 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.760 09:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.760 09:25:52 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:27.760 09:25:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:27.760 09:25:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.760 09:25:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.760 ************************************ 00:13:27.760 START TEST raid_rebuild_test_io 00:13:27.760 ************************************ 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:27.760 
09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76860 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76860 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76860 ']' 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.760 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.760 [2024-11-20 09:25:53.103315] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:27.760 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:27.760 Zero copy mechanism will not be used. 
00:13:27.761 [2024-11-20 09:25:53.103555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76860 ] 00:13:28.020 [2024-11-20 09:25:53.260366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.020 [2024-11-20 09:25:53.373417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.280 [2024-11-20 09:25:53.581547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.280 [2024-11-20 09:25:53.581615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.539 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.539 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:28.539 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.539 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:28.539 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.539 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 BaseBdev1_malloc 00:13:28.800 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.800 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 [2024-11-20 09:25:54.004448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:28.800 [2024-11-20 09:25:54.004517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.800 [2024-11-20 09:25:54.004542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:28.800 [2024-11-20 09:25:54.004554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.800 [2024-11-20 09:25:54.006645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.800 [2024-11-20 09:25:54.006729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.800 BaseBdev1 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 BaseBdev2_malloc 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 [2024-11-20 09:25:54.060160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:28.800 [2024-11-20 09:25:54.060302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.800 [2024-11-20 09:25:54.060358] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:28.800 [2024-11-20 09:25:54.060396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.800 [2024-11-20 09:25:54.062676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.800 [2024-11-20 09:25:54.062755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.800 BaseBdev2 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 spare_malloc 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 spare_delay 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 [2024-11-20 09:25:54.143447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:28.800 [2024-11-20 09:25:54.143507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.800 [2024-11-20 09:25:54.143537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:28.800 [2024-11-20 09:25:54.143566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.800 [2024-11-20 09:25:54.145843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.800 [2024-11-20 09:25:54.145882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.800 spare 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.800 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.800 [2024-11-20 09:25:54.155490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.800 [2024-11-20 09:25:54.157551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.800 [2024-11-20 09:25:54.157692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:28.800 [2024-11-20 09:25:54.157743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:28.801 [2024-11-20 09:25:54.158040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:28.801 [2024-11-20 09:25:54.158252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:28.801 [2024-11-20 09:25:54.158300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:28.801 [2024-11-20 09:25:54.158529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.801 
"name": "raid_bdev1", 00:13:28.801 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:28.801 "strip_size_kb": 0, 00:13:28.801 "state": "online", 00:13:28.801 "raid_level": "raid1", 00:13:28.801 "superblock": false, 00:13:28.801 "num_base_bdevs": 2, 00:13:28.801 "num_base_bdevs_discovered": 2, 00:13:28.801 "num_base_bdevs_operational": 2, 00:13:28.801 "base_bdevs_list": [ 00:13:28.801 { 00:13:28.801 "name": "BaseBdev1", 00:13:28.801 "uuid": "707c24d0-eb3e-5262-8c49-40118108fd5d", 00:13:28.801 "is_configured": true, 00:13:28.801 "data_offset": 0, 00:13:28.801 "data_size": 65536 00:13:28.801 }, 00:13:28.801 { 00:13:28.801 "name": "BaseBdev2", 00:13:28.801 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:28.801 "is_configured": true, 00:13:28.801 "data_offset": 0, 00:13:28.801 "data_size": 65536 00:13:28.801 } 00:13:28.801 ] 00:13:28.801 }' 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.801 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:29.370 [2024-11-20 09:25:54.662959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 [2024-11-20 09:25:54.762508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.370 09:25:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.370 "name": "raid_bdev1", 00:13:29.370 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:29.370 "strip_size_kb": 0, 00:13:29.370 "state": "online", 00:13:29.370 "raid_level": "raid1", 00:13:29.370 "superblock": false, 00:13:29.370 "num_base_bdevs": 2, 00:13:29.370 "num_base_bdevs_discovered": 1, 00:13:29.370 "num_base_bdevs_operational": 1, 00:13:29.370 "base_bdevs_list": [ 00:13:29.370 { 00:13:29.370 "name": null, 00:13:29.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.370 "is_configured": false, 00:13:29.370 "data_offset": 0, 00:13:29.370 "data_size": 65536 00:13:29.370 }, 00:13:29.370 { 00:13:29.370 "name": "BaseBdev2", 00:13:29.370 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:29.370 "is_configured": true, 00:13:29.370 "data_offset": 0, 00:13:29.370 "data_size": 65536 00:13:29.370 } 00:13:29.370 ] 00:13:29.370 }' 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:29.370 09:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.643 [2024-11-20 09:25:54.866149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:29.643 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:29.643 Zero copy mechanism will not be used. 00:13:29.643 Running I/O for 60 seconds... 00:13:29.902 09:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.902 09:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.902 09:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.902 [2024-11-20 09:25:55.222628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.902 09:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.902 09:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:29.902 [2024-11-20 09:25:55.285120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:29.902 [2024-11-20 09:25:55.287276] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.161 [2024-11-20 09:25:55.396820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.161 [2024-11-20 09:25:55.397558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.161 [2024-11-20 09:25:55.613299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.161 [2024-11-20 09:25:55.613757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.681 159.00 IOPS, 477.00 MiB/s 
[2024-11-20T09:25:56.137Z] [2024-11-20 09:25:55.929439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:30.681 [2024-11-20 09:25:55.930098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:30.941 [2024-11-20 09:25:56.147106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:30.941 [2024-11-20 09:25:56.147593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.941 "name": "raid_bdev1", 00:13:30.941 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:30.941 
"strip_size_kb": 0, 00:13:30.941 "state": "online", 00:13:30.941 "raid_level": "raid1", 00:13:30.941 "superblock": false, 00:13:30.941 "num_base_bdevs": 2, 00:13:30.941 "num_base_bdevs_discovered": 2, 00:13:30.941 "num_base_bdevs_operational": 2, 00:13:30.941 "process": { 00:13:30.941 "type": "rebuild", 00:13:30.941 "target": "spare", 00:13:30.941 "progress": { 00:13:30.941 "blocks": 10240, 00:13:30.941 "percent": 15 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 "base_bdevs_list": [ 00:13:30.941 { 00:13:30.941 "name": "spare", 00:13:30.941 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:30.941 "is_configured": true, 00:13:30.941 "data_offset": 0, 00:13:30.941 "data_size": 65536 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "name": "BaseBdev2", 00:13:30.941 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:30.941 "is_configured": true, 00:13:30.941 "data_offset": 0, 00:13:30.941 "data_size": 65536 00:13:30.941 } 00:13:30.941 ] 00:13:30.941 }' 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.941 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.200 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.200 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:31.200 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.200 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.200 [2024-11-20 09:25:56.425182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.200 [2024-11-20 09:25:56.494008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:13:31.200 [2024-11-20 09:25:56.599700] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:31.200 [2024-11-20 09:25:56.608858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.200 [2024-11-20 09:25:56.608927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.200 [2024-11-20 09:25:56.608943] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:31.459 [2024-11-20 09:25:56.652896] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.459 "name": "raid_bdev1", 00:13:31.459 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:31.459 "strip_size_kb": 0, 00:13:31.459 "state": "online", 00:13:31.459 "raid_level": "raid1", 00:13:31.459 "superblock": false, 00:13:31.459 "num_base_bdevs": 2, 00:13:31.459 "num_base_bdevs_discovered": 1, 00:13:31.459 "num_base_bdevs_operational": 1, 00:13:31.459 "base_bdevs_list": [ 00:13:31.459 { 00:13:31.459 "name": null, 00:13:31.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.459 "is_configured": false, 00:13:31.459 "data_offset": 0, 00:13:31.459 "data_size": 65536 00:13:31.459 }, 00:13:31.459 { 00:13:31.459 "name": "BaseBdev2", 00:13:31.459 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:31.459 "is_configured": true, 00:13:31.459 "data_offset": 0, 00:13:31.459 "data_size": 65536 00:13:31.459 } 00:13:31.459 ] 00:13:31.459 }' 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.459 09:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.719 129.00 IOPS, 387.00 MiB/s [2024-11-20T09:25:57.175Z] 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.719 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.719 "name": "raid_bdev1", 00:13:31.719 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:31.719 "strip_size_kb": 0, 00:13:31.719 "state": "online", 00:13:31.719 "raid_level": "raid1", 00:13:31.719 "superblock": false, 00:13:31.719 "num_base_bdevs": 2, 00:13:31.719 "num_base_bdevs_discovered": 1, 00:13:31.719 "num_base_bdevs_operational": 1, 00:13:31.719 "base_bdevs_list": [ 00:13:31.719 { 00:13:31.719 "name": null, 00:13:31.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.719 "is_configured": false, 00:13:31.719 "data_offset": 0, 00:13:31.719 "data_size": 65536 00:13:31.719 }, 00:13:31.719 { 00:13:31.719 "name": "BaseBdev2", 00:13:31.719 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:31.719 "is_configured": true, 00:13:31.719 "data_offset": 0, 00:13:31.719 "data_size": 65536 00:13:31.719 } 00:13:31.719 ] 00:13:31.719 }' 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.979 09:25:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 [2024-11-20 09:25:57.283050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.979 09:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:31.979 [2024-11-20 09:25:57.350642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:31.979 [2024-11-20 09:25:57.352827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.238 [2024-11-20 09:25:57.461149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.238 [2024-11-20 09:25:57.461790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:32.238 [2024-11-20 09:25:57.663655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.238 [2024-11-20 09:25:57.664115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:32.498 137.67 IOPS, 413.00 MiB/s [2024-11-20T09:25:57.954Z] [2024-11-20 09:25:57.917191] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:32.498 [2024-11-20 
09:25:57.917897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:32.757 [2024-11-20 09:25:58.033159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:32.757 [2024-11-20 09:25:58.033638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:33.016 [2024-11-20 09:25:58.273668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:33.016 [2024-11-20 09:25:58.274290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:33.016 "name": "raid_bdev1", 00:13:33.016 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:33.016 "strip_size_kb": 0, 00:13:33.016 "state": "online", 00:13:33.016 "raid_level": "raid1", 00:13:33.016 "superblock": false, 00:13:33.016 "num_base_bdevs": 2, 00:13:33.016 "num_base_bdevs_discovered": 2, 00:13:33.016 "num_base_bdevs_operational": 2, 00:13:33.016 "process": { 00:13:33.016 "type": "rebuild", 00:13:33.016 "target": "spare", 00:13:33.016 "progress": { 00:13:33.016 "blocks": 14336, 00:13:33.016 "percent": 21 00:13:33.016 } 00:13:33.016 }, 00:13:33.016 "base_bdevs_list": [ 00:13:33.016 { 00:13:33.016 "name": "spare", 00:13:33.016 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:33.016 "is_configured": true, 00:13:33.016 "data_offset": 0, 00:13:33.016 "data_size": 65536 00:13:33.016 }, 00:13:33.016 { 00:13:33.016 "name": "BaseBdev2", 00:13:33.016 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:33.016 "is_configured": true, 00:13:33.016 "data_offset": 0, 00:13:33.016 "data_size": 65536 00:13:33.016 } 00:13:33.016 ] 00:13:33.016 }' 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.016 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.274 [2024-11-20 09:25:58.485018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:33.274 [2024-11-20 09:25:58.485484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:33.274 09:25:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=433 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.274 "name": "raid_bdev1", 00:13:33.274 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:33.274 "strip_size_kb": 0, 00:13:33.274 "state": "online", 00:13:33.274 "raid_level": "raid1", 00:13:33.274 "superblock": false, 00:13:33.274 "num_base_bdevs": 2, 00:13:33.274 
"num_base_bdevs_discovered": 2, 00:13:33.274 "num_base_bdevs_operational": 2, 00:13:33.274 "process": { 00:13:33.274 "type": "rebuild", 00:13:33.274 "target": "spare", 00:13:33.274 "progress": { 00:13:33.274 "blocks": 16384, 00:13:33.274 "percent": 25 00:13:33.274 } 00:13:33.274 }, 00:13:33.274 "base_bdevs_list": [ 00:13:33.274 { 00:13:33.274 "name": "spare", 00:13:33.274 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:33.274 "is_configured": true, 00:13:33.274 "data_offset": 0, 00:13:33.274 "data_size": 65536 00:13:33.274 }, 00:13:33.274 { 00:13:33.274 "name": "BaseBdev2", 00:13:33.274 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:33.274 "is_configured": true, 00:13:33.274 "data_offset": 0, 00:13:33.274 "data_size": 65536 00:13:33.274 } 00:13:33.274 ] 00:13:33.274 }' 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.274 09:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.533 [2024-11-20 09:25:58.812232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:33.533 [2024-11-20 09:25:58.812980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:33.792 118.25 IOPS, 354.75 MiB/s [2024-11-20T09:25:59.248Z] [2024-11-20 09:25:59.028166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:33.792 [2024-11-20 09:25:59.028530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:34.051 [2024-11-20 09:25:59.362386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:34.308 [2024-11-20 09:25:59.577772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.308 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.309 "name": "raid_bdev1", 00:13:34.309 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:34.309 "strip_size_kb": 0, 00:13:34.309 "state": "online", 00:13:34.309 "raid_level": "raid1", 00:13:34.309 "superblock": false, 00:13:34.309 "num_base_bdevs": 2, 00:13:34.309 
"num_base_bdevs_discovered": 2, 00:13:34.309 "num_base_bdevs_operational": 2, 00:13:34.309 "process": { 00:13:34.309 "type": "rebuild", 00:13:34.309 "target": "spare", 00:13:34.309 "progress": { 00:13:34.309 "blocks": 28672, 00:13:34.309 "percent": 43 00:13:34.309 } 00:13:34.309 }, 00:13:34.309 "base_bdevs_list": [ 00:13:34.309 { 00:13:34.309 "name": "spare", 00:13:34.309 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:34.309 "is_configured": true, 00:13:34.309 "data_offset": 0, 00:13:34.309 "data_size": 65536 00:13:34.309 }, 00:13:34.309 { 00:13:34.309 "name": "BaseBdev2", 00:13:34.309 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:34.309 "is_configured": true, 00:13:34.309 "data_offset": 0, 00:13:34.309 "data_size": 65536 00:13:34.309 } 00:13:34.309 ] 00:13:34.309 }' 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.309 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.567 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.567 09:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:34.826 102.80 IOPS, 308.40 MiB/s [2024-11-20T09:26:00.282Z] [2024-11-20 09:26:00.233123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:35.402 [2024-11-20 09:26:00.603171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:35.402 [2024-11-20 09:26:00.718903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.402 "name": "raid_bdev1", 00:13:35.402 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:35.402 "strip_size_kb": 0, 00:13:35.402 "state": "online", 00:13:35.402 "raid_level": "raid1", 00:13:35.402 "superblock": false, 00:13:35.402 "num_base_bdevs": 2, 00:13:35.402 "num_base_bdevs_discovered": 2, 00:13:35.402 "num_base_bdevs_operational": 2, 00:13:35.402 "process": { 00:13:35.402 "type": "rebuild", 00:13:35.402 "target": "spare", 00:13:35.402 "progress": { 00:13:35.402 "blocks": 47104, 00:13:35.402 "percent": 71 00:13:35.402 } 00:13:35.402 }, 00:13:35.402 "base_bdevs_list": [ 00:13:35.402 { 00:13:35.402 "name": "spare", 00:13:35.402 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:35.402 "is_configured": true, 00:13:35.402 "data_offset": 0, 00:13:35.402 
"data_size": 65536 00:13:35.402 }, 00:13:35.402 { 00:13:35.402 "name": "BaseBdev2", 00:13:35.402 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:35.402 "is_configured": true, 00:13:35.402 "data_offset": 0, 00:13:35.402 "data_size": 65536 00:13:35.402 } 00:13:35.402 ] 00:13:35.402 }' 00:13:35.402 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.661 94.00 IOPS, 282.00 MiB/s [2024-11-20T09:26:01.117Z] 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.661 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.661 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.661 09:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.661 [2024-11-20 09:26:01.077184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:36.595 [2024-11-20 09:26:01.731126] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:36.595 [2024-11-20 09:26:01.837412] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:36.595 [2024-11-20 09:26:01.840178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.595 84.86 IOPS, 254.57 MiB/s [2024-11-20T09:26:02.051Z] 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.595 09:26:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.595 09:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.595 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.595 "name": "raid_bdev1", 00:13:36.595 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:36.595 "strip_size_kb": 0, 00:13:36.595 "state": "online", 00:13:36.595 "raid_level": "raid1", 00:13:36.595 "superblock": false, 00:13:36.595 "num_base_bdevs": 2, 00:13:36.595 "num_base_bdevs_discovered": 2, 00:13:36.595 "num_base_bdevs_operational": 2, 00:13:36.595 "base_bdevs_list": [ 00:13:36.595 { 00:13:36.595 "name": "spare", 00:13:36.595 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:36.595 "is_configured": true, 00:13:36.595 "data_offset": 0, 00:13:36.595 "data_size": 65536 00:13:36.595 }, 00:13:36.595 { 00:13:36.595 "name": "BaseBdev2", 00:13:36.595 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:36.595 "is_configured": true, 00:13:36.595 "data_offset": 0, 00:13:36.595 "data_size": 65536 00:13:36.595 } 00:13:36.595 ] 00:13:36.595 }' 00:13:36.595 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.853 "name": "raid_bdev1", 00:13:36.853 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:36.853 "strip_size_kb": 0, 00:13:36.853 "state": "online", 00:13:36.853 "raid_level": "raid1", 00:13:36.853 "superblock": false, 00:13:36.853 "num_base_bdevs": 2, 00:13:36.853 "num_base_bdevs_discovered": 2, 00:13:36.853 "num_base_bdevs_operational": 2, 00:13:36.853 "base_bdevs_list": [ 00:13:36.853 { 00:13:36.853 "name": "spare", 00:13:36.853 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:36.853 "is_configured": true, 
00:13:36.853 "data_offset": 0, 00:13:36.853 "data_size": 65536 00:13:36.853 }, 00:13:36.853 { 00:13:36.853 "name": "BaseBdev2", 00:13:36.853 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:36.853 "is_configured": true, 00:13:36.853 "data_offset": 0, 00:13:36.853 "data_size": 65536 00:13:36.853 } 00:13:36.853 ] 00:13:36.853 }' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.853 "name": "raid_bdev1", 00:13:36.853 "uuid": "f56461a7-1e09-4c03-8db8-8245a02d05b6", 00:13:36.853 "strip_size_kb": 0, 00:13:36.853 "state": "online", 00:13:36.853 "raid_level": "raid1", 00:13:36.853 "superblock": false, 00:13:36.853 "num_base_bdevs": 2, 00:13:36.853 "num_base_bdevs_discovered": 2, 00:13:36.853 "num_base_bdevs_operational": 2, 00:13:36.853 "base_bdevs_list": [ 00:13:36.853 { 00:13:36.853 "name": "spare", 00:13:36.853 "uuid": "72860c94-33f5-5cb9-95f0-7da22f028cb3", 00:13:36.853 "is_configured": true, 00:13:36.853 "data_offset": 0, 00:13:36.853 "data_size": 65536 00:13:36.853 }, 00:13:36.853 { 00:13:36.853 "name": "BaseBdev2", 00:13:36.853 "uuid": "33688211-596c-596d-ada5-cfa5ee7fd297", 00:13:36.853 "is_configured": true, 00:13:36.853 "data_offset": 0, 00:13:36.853 "data_size": 65536 00:13:36.853 } 00:13:36.853 ] 00:13:36.853 }' 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.853 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.420 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.420 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.420 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.420 [2024-11-20 09:26:02.662080] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:13:37.420 [2024-11-20 09:26:02.662197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.420 00:13:37.420 Latency(us) 00:13:37.420 [2024-11-20T09:26:02.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.420 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:37.420 raid_bdev1 : 7.88 78.04 234.12 0.00 0.00 15277.27 313.01 109436.53 00:13:37.420 [2024-11-20T09:26:02.876Z] =================================================================================================================== 00:13:37.420 [2024-11-20T09:26:02.876Z] Total : 78.04 234.12 0.00 0.00 15277.27 313.01 109436.53 00:13:37.420 { 00:13:37.420 "results": [ 00:13:37.420 { 00:13:37.420 "job": "raid_bdev1", 00:13:37.420 "core_mask": "0x1", 00:13:37.420 "workload": "randrw", 00:13:37.420 "percentage": 50, 00:13:37.420 "status": "finished", 00:13:37.420 "queue_depth": 2, 00:13:37.420 "io_size": 3145728, 00:13:37.420 "runtime": 7.880601, 00:13:37.420 "iops": 78.03973326399851, 00:13:37.420 "mibps": 234.11919979199553, 00:13:37.420 "io_failed": 0, 00:13:37.420 "io_timeout": 0, 00:13:37.420 "avg_latency_us": 15277.26711115845, 00:13:37.420 "min_latency_us": 313.0131004366812, 00:13:37.421 "max_latency_us": 109436.5344978166 00:13:37.421 } 00:13:37.421 ], 00:13:37.421 "core_count": 1 00:13:37.421 } 00:13:37.421 [2024-11-20 09:26:02.756505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.421 [2024-11-20 09:26:02.756554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.421 [2024-11-20 09:26:02.756634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.421 [2024-11-20 09:26:02.756646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:37.421 
09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.421 09:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:13:37.679 /dev/nbd0 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.679 1+0 records in 00:13:37.679 1+0 records out 00:13:37.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366298 s, 11.2 MB/s 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.679 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:37.970 /dev/nbd1 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.970 1+0 records in 00:13:37.970 1+0 records out 00:13:37.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281483 s, 14.6 MB/s 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:37.970 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.273 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.532 09:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.793 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.793 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76860 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76860 ']' 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76860 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76860 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76860' 00:13:38.794 killing process with pid 76860 00:13:38.794 Received shutdown signal, test time was about 9.249411 seconds 00:13:38.794 00:13:38.794 Latency(us) 00:13:38.794 [2024-11-20T09:26:04.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.794 [2024-11-20T09:26:04.250Z] =================================================================================================================== 00:13:38.794 [2024-11-20T09:26:04.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76860 00:13:38.794 [2024-11-20 09:26:04.100036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.794 09:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76860 00:13:39.053 [2024-11-20 09:26:04.356434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:40.429 00:13:40.429 real 0m12.587s 00:13:40.429 user 0m15.937s 00:13:40.429 sys 0m1.494s 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.429 ************************************ 00:13:40.429 END TEST raid_rebuild_test_io 00:13:40.429 ************************************ 00:13:40.429 09:26:05 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:40.429 09:26:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:40.429 
09:26:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.429 09:26:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.429 ************************************ 00:13:40.429 START TEST raid_rebuild_test_sb_io 00:13:40.429 ************************************ 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.429 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77237 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77237 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77237 ']' 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:40.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.430 09:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.430 [2024-11-20 09:26:05.755217] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:13:40.430 [2024-11-20 09:26:05.755448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77237 ] 00:13:40.430 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.430 Zero copy mechanism will not be used. 00:13:40.690 [2024-11-20 09:26:05.930728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.690 [2024-11-20 09:26:06.049702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.948 [2024-11-20 09:26:06.270308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.948 [2024-11-20 09:26:06.270462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.206 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.206 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:41.206 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.206 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.206 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.206 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.466 
BaseBdev1_malloc 00:13:41.466 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.466 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.466 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.466 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 [2024-11-20 09:26:06.679124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.467 [2024-11-20 09:26:06.679200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.467 [2024-11-20 09:26:06.679224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:41.467 [2024-11-20 09:26:06.679237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.467 [2024-11-20 09:26:06.681525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.467 [2024-11-20 09:26:06.681563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.467 BaseBdev1 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 BaseBdev2_malloc 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 [2024-11-20 09:26:06.734713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:41.467 [2024-11-20 09:26:06.734790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.467 [2024-11-20 09:26:06.734826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.467 [2024-11-20 09:26:06.734839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.467 [2024-11-20 09:26:06.737008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.467 [2024-11-20 09:26:06.737108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.467 BaseBdev2 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 spare_malloc 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 spare_delay 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 [2024-11-20 09:26:06.818042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.467 [2024-11-20 09:26:06.818113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.467 [2024-11-20 09:26:06.818136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:41.467 [2024-11-20 09:26:06.818149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.467 [2024-11-20 09:26:06.820711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.467 [2024-11-20 09:26:06.820798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.467 spare 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 [2024-11-20 09:26:06.830084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.467 [2024-11-20 09:26:06.832079] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.467 [2024-11-20 09:26:06.832263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.467 [2024-11-20 09:26:06.832281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.467 [2024-11-20 09:26:06.832578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:41.467 [2024-11-20 09:26:06.832770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.467 [2024-11-20 09:26:06.832787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.467 [2024-11-20 09:26:06.832950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.467 "name": "raid_bdev1", 00:13:41.467 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:41.467 "strip_size_kb": 0, 00:13:41.467 "state": "online", 00:13:41.467 "raid_level": "raid1", 00:13:41.467 "superblock": true, 00:13:41.467 "num_base_bdevs": 2, 00:13:41.467 "num_base_bdevs_discovered": 2, 00:13:41.467 "num_base_bdevs_operational": 2, 00:13:41.467 "base_bdevs_list": [ 00:13:41.467 { 00:13:41.467 "name": "BaseBdev1", 00:13:41.467 "uuid": "de17de52-f1b3-5ad6-b396-d31de2ee127a", 00:13:41.467 "is_configured": true, 00:13:41.467 "data_offset": 2048, 00:13:41.467 "data_size": 63488 00:13:41.467 }, 00:13:41.467 { 00:13:41.467 "name": "BaseBdev2", 00:13:41.467 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:41.467 "is_configured": true, 00:13:41.467 "data_offset": 2048, 00:13:41.467 "data_size": 63488 00:13:41.467 } 00:13:41.467 ] 00:13:41.467 }' 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.467 09:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.035 [2024-11-20 09:26:07.313597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.035 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.036 [2024-11-20 09:26:07.413075] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.036 "name": 
"raid_bdev1", 00:13:42.036 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:42.036 "strip_size_kb": 0, 00:13:42.036 "state": "online", 00:13:42.036 "raid_level": "raid1", 00:13:42.036 "superblock": true, 00:13:42.036 "num_base_bdevs": 2, 00:13:42.036 "num_base_bdevs_discovered": 1, 00:13:42.036 "num_base_bdevs_operational": 1, 00:13:42.036 "base_bdevs_list": [ 00:13:42.036 { 00:13:42.036 "name": null, 00:13:42.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.036 "is_configured": false, 00:13:42.036 "data_offset": 0, 00:13:42.036 "data_size": 63488 00:13:42.036 }, 00:13:42.036 { 00:13:42.036 "name": "BaseBdev2", 00:13:42.036 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:42.036 "is_configured": true, 00:13:42.036 "data_offset": 2048, 00:13:42.036 "data_size": 63488 00:13:42.036 } 00:13:42.036 ] 00:13:42.036 }' 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.036 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.296 [2024-11-20 09:26:07.500205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:42.296 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:42.296 Zero copy mechanism will not be used. 00:13:42.296 Running I/O for 60 seconds... 
00:13:42.555 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.555 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.555 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.555 [2024-11-20 09:26:07.865947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.555 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.555 09:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.555 [2024-11-20 09:26:07.921175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:42.555 [2024-11-20 09:26:07.923214] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.814 [2024-11-20 09:26:08.042174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.814 [2024-11-20 09:26:08.042926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.814 [2024-11-20 09:26:08.258591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.814 [2024-11-20 09:26:08.259053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.377 186.00 IOPS, 558.00 MiB/s [2024-11-20T09:26:08.833Z] [2024-11-20 09:26:08.702804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.636 "name": "raid_bdev1", 00:13:43.636 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:43.636 "strip_size_kb": 0, 00:13:43.636 "state": "online", 00:13:43.636 "raid_level": "raid1", 00:13:43.636 "superblock": true, 00:13:43.636 "num_base_bdevs": 2, 00:13:43.636 "num_base_bdevs_discovered": 2, 00:13:43.636 "num_base_bdevs_operational": 2, 00:13:43.636 "process": { 00:13:43.636 "type": "rebuild", 00:13:43.636 "target": "spare", 00:13:43.636 "progress": { 00:13:43.636 "blocks": 10240, 00:13:43.636 "percent": 16 00:13:43.636 } 00:13:43.636 }, 00:13:43.636 "base_bdevs_list": [ 00:13:43.636 { 00:13:43.636 "name": "spare", 00:13:43.636 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:43.636 "is_configured": true, 00:13:43.636 "data_offset": 2048, 00:13:43.636 "data_size": 63488 00:13:43.636 }, 00:13:43.636 { 00:13:43.636 "name": "BaseBdev2", 00:13:43.636 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:43.636 "is_configured": true, 
00:13:43.636 "data_offset": 2048, 00:13:43.636 "data_size": 63488 00:13:43.636 } 00:13:43.636 ] 00:13:43.636 }' 00:13:43.636 09:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.636 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.636 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.636 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.637 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.637 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.637 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.637 [2024-11-20 09:26:09.044356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.896 [2024-11-20 09:26:09.142331] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.896 [2024-11-20 09:26:09.145212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.896 [2024-11-20 09:26:09.145261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.896 [2024-11-20 09:26:09.145273] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.896 [2024-11-20 09:26:09.188894] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.896 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.896 "name": "raid_bdev1", 00:13:43.896 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:43.896 "strip_size_kb": 0, 00:13:43.896 "state": "online", 00:13:43.896 "raid_level": "raid1", 00:13:43.896 "superblock": true, 00:13:43.896 "num_base_bdevs": 2, 00:13:43.896 "num_base_bdevs_discovered": 1, 00:13:43.896 "num_base_bdevs_operational": 1, 00:13:43.897 "base_bdevs_list": [ 
00:13:43.897 { 00:13:43.897 "name": null, 00:13:43.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.897 "is_configured": false, 00:13:43.897 "data_offset": 0, 00:13:43.897 "data_size": 63488 00:13:43.897 }, 00:13:43.897 { 00:13:43.897 "name": "BaseBdev2", 00:13:43.897 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:43.897 "is_configured": true, 00:13:43.897 "data_offset": 2048, 00:13:43.897 "data_size": 63488 00:13:43.897 } 00:13:43.897 ] 00:13:43.897 }' 00:13:43.897 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.897 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.417 166.00 IOPS, 498.00 MiB/s [2024-11-20T09:26:09.873Z] 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.417 "name": 
"raid_bdev1", 00:13:44.417 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:44.417 "strip_size_kb": 0, 00:13:44.417 "state": "online", 00:13:44.417 "raid_level": "raid1", 00:13:44.417 "superblock": true, 00:13:44.417 "num_base_bdevs": 2, 00:13:44.417 "num_base_bdevs_discovered": 1, 00:13:44.417 "num_base_bdevs_operational": 1, 00:13:44.417 "base_bdevs_list": [ 00:13:44.417 { 00:13:44.417 "name": null, 00:13:44.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.417 "is_configured": false, 00:13:44.417 "data_offset": 0, 00:13:44.417 "data_size": 63488 00:13:44.417 }, 00:13:44.417 { 00:13:44.417 "name": "BaseBdev2", 00:13:44.417 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:44.417 "is_configured": true, 00:13:44.417 "data_offset": 2048, 00:13:44.417 "data_size": 63488 00:13:44.417 } 00:13:44.417 ] 00:13:44.417 }' 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.417 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.417 [2024-11-20 09:26:09.837624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.676 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.676 09:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:44.676 [2024-11-20 
09:26:09.880778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:44.676 [2024-11-20 09:26:09.882895] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.676 [2024-11-20 09:26:09.992021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.676 [2024-11-20 09:26:09.992739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.935 [2024-11-20 09:26:10.213275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.936 [2024-11-20 09:26:10.213751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:45.195 [2024-11-20 09:26:10.456583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:45.195 166.33 IOPS, 499.00 MiB/s [2024-11-20T09:26:10.651Z] [2024-11-20 09:26:10.586792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.454 [2024-11-20 09:26:10.826731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:45.454 [2024-11-20 09:26:10.827413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.454 09:26:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.454 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.714 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.714 "name": "raid_bdev1", 00:13:45.714 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:45.714 "strip_size_kb": 0, 00:13:45.714 "state": "online", 00:13:45.714 "raid_level": "raid1", 00:13:45.714 "superblock": true, 00:13:45.714 "num_base_bdevs": 2, 00:13:45.714 "num_base_bdevs_discovered": 2, 00:13:45.714 "num_base_bdevs_operational": 2, 00:13:45.714 "process": { 00:13:45.714 "type": "rebuild", 00:13:45.714 "target": "spare", 00:13:45.714 "progress": { 00:13:45.714 "blocks": 14336, 00:13:45.714 "percent": 22 00:13:45.714 } 00:13:45.714 }, 00:13:45.714 "base_bdevs_list": [ 00:13:45.714 { 00:13:45.714 "name": "spare", 00:13:45.714 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:45.714 "is_configured": true, 00:13:45.714 "data_offset": 2048, 00:13:45.714 "data_size": 63488 00:13:45.714 }, 00:13:45.714 { 00:13:45.714 "name": "BaseBdev2", 00:13:45.714 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:45.714 "is_configured": true, 00:13:45.714 "data_offset": 2048, 00:13:45.714 "data_size": 63488 00:13:45.714 } 00:13:45.714 ] 00:13:45.714 }' 00:13:45.714 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.714 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.714 09:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.714 [2024-11-20 09:26:11.031470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.714 [2024-11-20 09:26:11.031912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:45.714 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=446 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.714 09:26:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.714 "name": "raid_bdev1", 00:13:45.714 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:45.714 "strip_size_kb": 0, 00:13:45.714 "state": "online", 00:13:45.714 "raid_level": "raid1", 00:13:45.714 "superblock": true, 00:13:45.714 "num_base_bdevs": 2, 00:13:45.714 "num_base_bdevs_discovered": 2, 00:13:45.714 "num_base_bdevs_operational": 2, 00:13:45.714 "process": { 00:13:45.714 "type": "rebuild", 00:13:45.714 "target": "spare", 00:13:45.714 "progress": { 00:13:45.714 "blocks": 16384, 00:13:45.714 "percent": 25 00:13:45.714 } 00:13:45.714 }, 00:13:45.714 "base_bdevs_list": [ 00:13:45.714 { 00:13:45.714 "name": "spare", 00:13:45.714 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:45.714 "is_configured": true, 00:13:45.714 "data_offset": 2048, 00:13:45.714 "data_size": 63488 00:13:45.714 }, 00:13:45.714 { 00:13:45.714 "name": "BaseBdev2", 00:13:45.714 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:45.714 "is_configured": true, 00:13:45.714 "data_offset": 2048, 00:13:45.714 "data_size": 63488 00:13:45.714 } 00:13:45.714 ] 00:13:45.714 }' 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.714 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.974 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.974 09:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.974 [2024-11-20 09:26:11.283991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:46.238 147.75 IOPS, 443.25 MiB/s [2024-11-20T09:26:11.694Z] [2024-11-20 09:26:11.511155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:46.509 [2024-11-20 09:26:11.848059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:46.509 [2024-11-20 09:26:11.848663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.770 [2024-11-20 09:26:12.188990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.770 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.030 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.030 "name": "raid_bdev1", 00:13:47.030 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:47.030 "strip_size_kb": 0, 00:13:47.030 "state": "online", 00:13:47.030 "raid_level": "raid1", 00:13:47.030 "superblock": true, 00:13:47.030 "num_base_bdevs": 2, 00:13:47.030 "num_base_bdevs_discovered": 2, 00:13:47.030 "num_base_bdevs_operational": 2, 00:13:47.030 "process": { 00:13:47.030 "type": "rebuild", 00:13:47.030 "target": "spare", 00:13:47.030 "progress": { 00:13:47.030 "blocks": 32768, 00:13:47.030 "percent": 51 00:13:47.030 } 00:13:47.030 }, 00:13:47.030 "base_bdevs_list": [ 00:13:47.030 { 00:13:47.030 "name": "spare", 00:13:47.030 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:47.030 "is_configured": true, 00:13:47.030 "data_offset": 2048, 00:13:47.030 "data_size": 63488 00:13:47.030 }, 00:13:47.030 { 00:13:47.030 "name": "BaseBdev2", 00:13:47.030 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:47.030 "is_configured": true, 00:13:47.030 "data_offset": 2048, 00:13:47.030 "data_size": 63488 00:13:47.030 } 00:13:47.030 ] 00:13:47.030 }' 00:13:47.030 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.030 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.030 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.030 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.030 09:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.030 [2024-11-20 09:26:12.306951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:47.549 132.40 IOPS, 397.20 MiB/s [2024-11-20T09:26:13.005Z] [2024-11-20 09:26:12.920991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.549 [2024-11-20 09:26:12.921696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.808 [2024-11-20 09:26:13.138841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.068 "name": "raid_bdev1", 00:13:48.068 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:48.068 "strip_size_kb": 0, 00:13:48.068 "state": "online", 00:13:48.068 "raid_level": "raid1", 00:13:48.068 "superblock": true, 00:13:48.068 "num_base_bdevs": 2, 00:13:48.068 "num_base_bdevs_discovered": 2, 00:13:48.068 "num_base_bdevs_operational": 2, 00:13:48.068 "process": { 00:13:48.068 "type": "rebuild", 00:13:48.068 "target": "spare", 00:13:48.068 "progress": { 00:13:48.068 "blocks": 49152, 00:13:48.068 "percent": 77 00:13:48.068 } 00:13:48.068 }, 00:13:48.068 "base_bdevs_list": [ 00:13:48.068 { 00:13:48.068 "name": "spare", 00:13:48.068 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:48.068 "is_configured": true, 00:13:48.068 "data_offset": 2048, 00:13:48.068 "data_size": 63488 00:13:48.068 }, 00:13:48.068 { 00:13:48.068 "name": "BaseBdev2", 00:13:48.068 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:48.068 "is_configured": true, 00:13:48.068 "data_offset": 2048, 00:13:48.068 "data_size": 63488 00:13:48.068 } 00:13:48.068 ] 00:13:48.068 }' 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:13:48.068 09:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.636 116.33 IOPS, 349.00 MiB/s [2024-11-20T09:26:14.092Z] [2024-11-20 09:26:14.015377] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:48.896 [2024-11-20 09:26:14.115228] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:48.896 [2024-11-20 09:26:14.117616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.155 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.155 105.43 IOPS, 316.29 MiB/s [2024-11-20T09:26:14.611Z] 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.155 "name": "raid_bdev1", 00:13:49.155 "uuid": 
"5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:49.156 "strip_size_kb": 0, 00:13:49.156 "state": "online", 00:13:49.156 "raid_level": "raid1", 00:13:49.156 "superblock": true, 00:13:49.156 "num_base_bdevs": 2, 00:13:49.156 "num_base_bdevs_discovered": 2, 00:13:49.156 "num_base_bdevs_operational": 2, 00:13:49.156 "base_bdevs_list": [ 00:13:49.156 { 00:13:49.156 "name": "spare", 00:13:49.156 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:49.156 "is_configured": true, 00:13:49.156 "data_offset": 2048, 00:13:49.156 "data_size": 63488 00:13:49.156 }, 00:13:49.156 { 00:13:49.156 "name": "BaseBdev2", 00:13:49.156 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:49.156 "is_configured": true, 00:13:49.156 "data_offset": 2048, 00:13:49.156 "data_size": 63488 00:13:49.156 } 00:13:49.156 ] 00:13:49.156 }' 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.156 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.414 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.414 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.414 "name": "raid_bdev1", 00:13:49.414 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:49.414 "strip_size_kb": 0, 00:13:49.414 "state": "online", 00:13:49.414 "raid_level": "raid1", 00:13:49.414 "superblock": true, 00:13:49.414 "num_base_bdevs": 2, 00:13:49.415 "num_base_bdevs_discovered": 2, 00:13:49.415 "num_base_bdevs_operational": 2, 00:13:49.415 "base_bdevs_list": [ 00:13:49.415 { 00:13:49.415 "name": "spare", 00:13:49.415 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:49.415 "is_configured": true, 00:13:49.415 "data_offset": 2048, 00:13:49.415 "data_size": 63488 00:13:49.415 }, 00:13:49.415 { 00:13:49.415 "name": "BaseBdev2", 00:13:49.415 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:49.415 "is_configured": true, 00:13:49.415 "data_offset": 2048, 00:13:49.415 "data_size": 63488 00:13:49.415 } 00:13:49.415 ] 00:13:49.415 }' 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.415 "name": "raid_bdev1", 00:13:49.415 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:49.415 "strip_size_kb": 0, 00:13:49.415 "state": "online", 00:13:49.415 "raid_level": "raid1", 00:13:49.415 "superblock": true, 00:13:49.415 
"num_base_bdevs": 2, 00:13:49.415 "num_base_bdevs_discovered": 2, 00:13:49.415 "num_base_bdevs_operational": 2, 00:13:49.415 "base_bdevs_list": [ 00:13:49.415 { 00:13:49.415 "name": "spare", 00:13:49.415 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:49.415 "is_configured": true, 00:13:49.415 "data_offset": 2048, 00:13:49.415 "data_size": 63488 00:13:49.415 }, 00:13:49.415 { 00:13:49.415 "name": "BaseBdev2", 00:13:49.415 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:49.415 "is_configured": true, 00:13:49.415 "data_offset": 2048, 00:13:49.415 "data_size": 63488 00:13:49.415 } 00:13:49.415 ] 00:13:49.415 }' 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.415 09:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.981 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.981 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.981 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.981 [2024-11-20 09:26:15.201395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.981 [2024-11-20 09:26:15.201432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.981 00:13:49.981 Latency(us) 00:13:49.981 [2024-11-20T09:26:15.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.981 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:49.981 raid_bdev1 : 7.81 98.54 295.63 0.00 0.00 13185.93 307.65 109436.53 00:13:49.981 [2024-11-20T09:26:15.437Z] =================================================================================================================== 00:13:49.981 [2024-11-20T09:26:15.437Z] Total : 98.54 295.63 0.00 0.00 13185.93 307.65 
109436.53 00:13:49.981 [2024-11-20 09:26:15.327074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.981 { 00:13:49.981 "results": [ 00:13:49.981 { 00:13:49.981 "job": "raid_bdev1", 00:13:49.981 "core_mask": "0x1", 00:13:49.981 "workload": "randrw", 00:13:49.981 "percentage": 50, 00:13:49.981 "status": "finished", 00:13:49.981 "queue_depth": 2, 00:13:49.981 "io_size": 3145728, 00:13:49.981 "runtime": 7.813843, 00:13:49.981 "iops": 98.54306005380451, 00:13:49.981 "mibps": 295.6291801614135, 00:13:49.981 "io_failed": 0, 00:13:49.981 "io_timeout": 0, 00:13:49.981 "avg_latency_us": 13185.932957522828, 00:13:49.981 "min_latency_us": 307.6471615720524, 00:13:49.981 "max_latency_us": 109436.5344978166 00:13:49.981 } 00:13:49.981 ], 00:13:49.981 "core_count": 1 00:13:49.981 } 00:13:49.981 [2024-11-20 09:26:15.327214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.981 [2024-11-20 09:26:15.327321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.982 [2024-11-20 09:26:15.327334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.982 09:26:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.982 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:50.240 /dev/nbd0 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.241 1+0 records in 00:13:50.241 1+0 records out 00:13:50.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584489 s, 7.0 MB/s 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.241 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:50.500 /dev/nbd1 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:50.500 09:26:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.500 1+0 records in 00:13:50.500 1+0 records out 00:13:50.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595019 s, 6.9 MB/s 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.500 09:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:50.760 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:51.019 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.020 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.278 09:26:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.278 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.278 [2024-11-20 09:26:16.601583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.279 [2024-11-20 09:26:16.601694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.279 [2024-11-20 09:26:16.601739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:51.279 [2024-11-20 
09:26:16.601771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.279 [2024-11-20 09:26:16.604229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.279 [2024-11-20 09:26:16.604307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.279 [2024-11-20 09:26:16.604474] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:51.279 [2024-11-20 09:26:16.604565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.279 [2024-11-20 09:26:16.604773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.279 spare 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.279 [2024-11-20 09:26:16.704728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:51.279 [2024-11-20 09:26:16.704829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.279 [2024-11-20 09:26:16.705203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:51.279 [2024-11-20 09:26:16.705400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:51.279 [2024-11-20 09:26:16.705414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:51.279 [2024-11-20 09:26:16.705644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.279 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.538 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.539 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.539 "name": "raid_bdev1", 00:13:51.539 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:51.539 "strip_size_kb": 0, 00:13:51.539 "state": 
"online", 00:13:51.539 "raid_level": "raid1", 00:13:51.539 "superblock": true, 00:13:51.539 "num_base_bdevs": 2, 00:13:51.539 "num_base_bdevs_discovered": 2, 00:13:51.539 "num_base_bdevs_operational": 2, 00:13:51.539 "base_bdevs_list": [ 00:13:51.539 { 00:13:51.539 "name": "spare", 00:13:51.539 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:51.539 "is_configured": true, 00:13:51.539 "data_offset": 2048, 00:13:51.539 "data_size": 63488 00:13:51.539 }, 00:13:51.539 { 00:13:51.539 "name": "BaseBdev2", 00:13:51.539 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:51.539 "is_configured": true, 00:13:51.539 "data_offset": 2048, 00:13:51.539 "data_size": 63488 00:13:51.539 } 00:13:51.539 ] 00:13:51.539 }' 00:13:51.539 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.539 09:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.798 09:26:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.798 "name": "raid_bdev1", 00:13:51.798 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:51.798 "strip_size_kb": 0, 00:13:51.798 "state": "online", 00:13:51.798 "raid_level": "raid1", 00:13:51.798 "superblock": true, 00:13:51.798 "num_base_bdevs": 2, 00:13:51.798 "num_base_bdevs_discovered": 2, 00:13:51.798 "num_base_bdevs_operational": 2, 00:13:51.798 "base_bdevs_list": [ 00:13:51.798 { 00:13:51.798 "name": "spare", 00:13:51.798 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:51.798 "is_configured": true, 00:13:51.798 "data_offset": 2048, 00:13:51.798 "data_size": 63488 00:13:51.798 }, 00:13:51.798 { 00:13:51.798 "name": "BaseBdev2", 00:13:51.798 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:51.798 "is_configured": true, 00:13:51.798 "data_offset": 2048, 00:13:51.798 "data_size": 63488 00:13:51.798 } 00:13:51.798 ] 00:13:51.798 }' 00:13:51.798 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:52.065 09:26:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.065 [2024-11-20 09:26:17.356570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.065 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.066 "name": "raid_bdev1", 00:13:52.066 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:52.066 "strip_size_kb": 0, 00:13:52.066 "state": "online", 00:13:52.066 "raid_level": "raid1", 00:13:52.066 "superblock": true, 00:13:52.066 "num_base_bdevs": 2, 00:13:52.066 "num_base_bdevs_discovered": 1, 00:13:52.066 "num_base_bdevs_operational": 1, 00:13:52.066 "base_bdevs_list": [ 00:13:52.066 { 00:13:52.066 "name": null, 00:13:52.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.066 "is_configured": false, 00:13:52.066 "data_offset": 0, 00:13:52.066 "data_size": 63488 00:13:52.066 }, 00:13:52.066 { 00:13:52.066 "name": "BaseBdev2", 00:13:52.066 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:52.066 "is_configured": true, 00:13:52.066 "data_offset": 2048, 00:13:52.066 "data_size": 63488 00:13:52.066 } 00:13:52.066 ] 00:13:52.066 }' 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.066 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.652 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.652 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.652 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.652 [2024-11-20 
09:26:17.835872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.652 [2024-11-20 09:26:17.836151] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:52.652 [2024-11-20 09:26:17.836234] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:52.653 [2024-11-20 09:26:17.836306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.653 [2024-11-20 09:26:17.855399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:52.653 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.653 09:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:52.653 [2024-11-20 09:26:17.857723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.596 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.596 "name": "raid_bdev1", 00:13:53.596 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:53.596 "strip_size_kb": 0, 00:13:53.596 "state": "online", 00:13:53.596 "raid_level": "raid1", 00:13:53.596 "superblock": true, 00:13:53.596 "num_base_bdevs": 2, 00:13:53.596 "num_base_bdevs_discovered": 2, 00:13:53.596 "num_base_bdevs_operational": 2, 00:13:53.596 "process": { 00:13:53.596 "type": "rebuild", 00:13:53.596 "target": "spare", 00:13:53.596 "progress": { 00:13:53.596 "blocks": 20480, 00:13:53.596 "percent": 32 00:13:53.596 } 00:13:53.596 }, 00:13:53.596 "base_bdevs_list": [ 00:13:53.596 { 00:13:53.597 "name": "spare", 00:13:53.597 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:53.597 "is_configured": true, 00:13:53.597 "data_offset": 2048, 00:13:53.597 "data_size": 63488 00:13:53.597 }, 00:13:53.597 { 00:13:53.597 "name": "BaseBdev2", 00:13:53.597 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:53.597 "is_configured": true, 00:13:53.597 "data_offset": 2048, 00:13:53.597 "data_size": 63488 00:13:53.597 } 00:13:53.597 ] 00:13:53.597 }' 00:13:53.597 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.597 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.597 09:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.597 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.597 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:53.597 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.597 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.597 [2024-11-20 09:26:19.025007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.855 [2024-11-20 09:26:19.064036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:53.855 [2024-11-20 09:26:19.064249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.855 [2024-11-20 09:26:19.064270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.855 [2024-11-20 09:26:19.064281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.855 "name": "raid_bdev1", 00:13:53.855 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:53.855 "strip_size_kb": 0, 00:13:53.855 "state": "online", 00:13:53.855 "raid_level": "raid1", 00:13:53.855 "superblock": true, 00:13:53.855 "num_base_bdevs": 2, 00:13:53.855 "num_base_bdevs_discovered": 1, 00:13:53.855 "num_base_bdevs_operational": 1, 00:13:53.855 "base_bdevs_list": [ 00:13:53.855 { 00:13:53.855 "name": null, 00:13:53.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.855 "is_configured": false, 00:13:53.855 "data_offset": 0, 00:13:53.855 "data_size": 63488 00:13:53.855 }, 00:13:53.855 { 00:13:53.855 "name": "BaseBdev2", 00:13:53.855 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:53.855 "is_configured": true, 00:13:53.855 "data_offset": 2048, 00:13:53.855 "data_size": 63488 00:13:53.855 } 00:13:53.855 ] 00:13:53.855 }' 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.855 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.113 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:54.113 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.113 [2024-11-20 09:26:19.530351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.113 [2024-11-20 09:26:19.530518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.113 [2024-11-20 09:26:19.530563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:54.113 [2024-11-20 09:26:19.530637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.113 [2024-11-20 09:26:19.531151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.113 [2024-11-20 09:26:19.531227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.113 [2024-11-20 09:26:19.531362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:54.113 [2024-11-20 09:26:19.531409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:54.113 [2024-11-20 09:26:19.531469] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:54.113 [2024-11-20 09:26:19.531560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.113 [2024-11-20 09:26:19.548828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:54.113 spare 00:13:54.113 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.113 [2024-11-20 09:26:19.550789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.113 09:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.528 "name": "raid_bdev1", 00:13:55.528 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:55.528 "strip_size_kb": 0, 00:13:55.528 
"state": "online", 00:13:55.528 "raid_level": "raid1", 00:13:55.528 "superblock": true, 00:13:55.528 "num_base_bdevs": 2, 00:13:55.528 "num_base_bdevs_discovered": 2, 00:13:55.528 "num_base_bdevs_operational": 2, 00:13:55.528 "process": { 00:13:55.528 "type": "rebuild", 00:13:55.528 "target": "spare", 00:13:55.528 "progress": { 00:13:55.528 "blocks": 20480, 00:13:55.528 "percent": 32 00:13:55.528 } 00:13:55.528 }, 00:13:55.528 "base_bdevs_list": [ 00:13:55.528 { 00:13:55.528 "name": "spare", 00:13:55.528 "uuid": "766c4e11-339e-579b-a624-e8acf6acc11d", 00:13:55.528 "is_configured": true, 00:13:55.528 "data_offset": 2048, 00:13:55.528 "data_size": 63488 00:13:55.528 }, 00:13:55.528 { 00:13:55.528 "name": "BaseBdev2", 00:13:55.528 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:55.528 "is_configured": true, 00:13:55.528 "data_offset": 2048, 00:13:55.528 "data_size": 63488 00:13:55.528 } 00:13:55.528 ] 00:13:55.528 }' 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.528 [2024-11-20 09:26:20.718655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.528 [2024-11-20 09:26:20.756907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:55.528 [2024-11-20 09:26:20.756976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.528 [2024-11-20 09:26:20.756995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.528 [2024-11-20 09:26:20.757002] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.528 09:26:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.528 "name": "raid_bdev1", 00:13:55.528 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:55.528 "strip_size_kb": 0, 00:13:55.528 "state": "online", 00:13:55.528 "raid_level": "raid1", 00:13:55.528 "superblock": true, 00:13:55.528 "num_base_bdevs": 2, 00:13:55.528 "num_base_bdevs_discovered": 1, 00:13:55.528 "num_base_bdevs_operational": 1, 00:13:55.528 "base_bdevs_list": [ 00:13:55.528 { 00:13:55.528 "name": null, 00:13:55.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.528 "is_configured": false, 00:13:55.528 "data_offset": 0, 00:13:55.528 "data_size": 63488 00:13:55.528 }, 00:13:55.528 { 00:13:55.528 "name": "BaseBdev2", 00:13:55.528 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:55.528 "is_configured": true, 00:13:55.528 "data_offset": 2048, 00:13:55.528 "data_size": 63488 00:13:55.528 } 00:13:55.528 ] 00:13:55.528 }' 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.528 09:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.096 "name": "raid_bdev1", 00:13:56.096 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:56.096 "strip_size_kb": 0, 00:13:56.096 "state": "online", 00:13:56.096 "raid_level": "raid1", 00:13:56.096 "superblock": true, 00:13:56.096 "num_base_bdevs": 2, 00:13:56.096 "num_base_bdevs_discovered": 1, 00:13:56.096 "num_base_bdevs_operational": 1, 00:13:56.096 "base_bdevs_list": [ 00:13:56.096 { 00:13:56.096 "name": null, 00:13:56.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.096 "is_configured": false, 00:13:56.096 "data_offset": 0, 00:13:56.096 "data_size": 63488 00:13:56.096 }, 00:13:56.096 { 00:13:56.096 "name": "BaseBdev2", 00:13:56.096 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:56.096 "is_configured": true, 00:13:56.096 "data_offset": 2048, 00:13:56.096 "data_size": 63488 00:13:56.096 } 00:13:56.096 ] 00:13:56.096 }' 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.096 [2024-11-20 09:26:21.377432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:56.096 [2024-11-20 09:26:21.377608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.096 [2024-11-20 09:26:21.377669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:56.096 [2024-11-20 09:26:21.377681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.096 [2024-11-20 09:26:21.378200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.096 [2024-11-20 09:26:21.378220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:56.096 [2024-11-20 09:26:21.378322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:56.096 [2024-11-20 09:26:21.378338] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:56.096 [2024-11-20 09:26:21.378349] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:56.096 [2024-11-20 09:26:21.378360] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:56.096 BaseBdev1 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.096 09:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.032 "name": "raid_bdev1", 00:13:57.032 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:57.032 "strip_size_kb": 0, 00:13:57.032 "state": "online", 00:13:57.032 "raid_level": "raid1", 00:13:57.032 "superblock": true, 00:13:57.032 "num_base_bdevs": 2, 00:13:57.032 "num_base_bdevs_discovered": 1, 00:13:57.032 "num_base_bdevs_operational": 1, 00:13:57.032 "base_bdevs_list": [ 00:13:57.032 { 00:13:57.032 "name": null, 00:13:57.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.032 "is_configured": false, 00:13:57.032 "data_offset": 0, 00:13:57.032 "data_size": 63488 00:13:57.032 }, 00:13:57.032 { 00:13:57.032 "name": "BaseBdev2", 00:13:57.032 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:57.032 "is_configured": true, 00:13:57.032 "data_offset": 2048, 00:13:57.032 "data_size": 63488 00:13:57.032 } 00:13:57.032 ] 00:13:57.032 }' 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.032 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.600 "name": "raid_bdev1", 00:13:57.600 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:57.600 "strip_size_kb": 0, 00:13:57.600 "state": "online", 00:13:57.600 "raid_level": "raid1", 00:13:57.600 "superblock": true, 00:13:57.600 "num_base_bdevs": 2, 00:13:57.600 "num_base_bdevs_discovered": 1, 00:13:57.600 "num_base_bdevs_operational": 1, 00:13:57.600 "base_bdevs_list": [ 00:13:57.600 { 00:13:57.600 "name": null, 00:13:57.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.600 "is_configured": false, 00:13:57.600 "data_offset": 0, 00:13:57.600 "data_size": 63488 00:13:57.600 }, 00:13:57.600 { 00:13:57.600 "name": "BaseBdev2", 00:13:57.600 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:57.600 "is_configured": true, 00:13:57.600 "data_offset": 2048, 00:13:57.600 "data_size": 63488 00:13:57.600 } 00:13:57.600 ] 00:13:57.600 }' 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.600 09:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.600 [2024-11-20 09:26:22.995015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.600 [2024-11-20 09:26:22.995265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:57.600 [2024-11-20 09:26:22.995336] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:57.600 request: 00:13:57.600 { 00:13:57.600 "base_bdev": "BaseBdev1", 00:13:57.600 "raid_bdev": "raid_bdev1", 00:13:57.600 "method": "bdev_raid_add_base_bdev", 00:13:57.600 "req_id": 1 00:13:57.600 } 00:13:57.600 Got JSON-RPC error response 00:13:57.600 response: 00:13:57.600 { 00:13:57.600 "code": -22, 00:13:57.600 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:57.600 } 00:13:57.600 09:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:57.600 09:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:57.600 09:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.600 09:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.600 09:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.600 09:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:58.981 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.981 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.982 "name": "raid_bdev1", 00:13:58.982 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:58.982 "strip_size_kb": 0, 00:13:58.982 "state": "online", 00:13:58.982 "raid_level": "raid1", 00:13:58.982 "superblock": true, 00:13:58.982 "num_base_bdevs": 2, 00:13:58.982 "num_base_bdevs_discovered": 1, 00:13:58.982 "num_base_bdevs_operational": 1, 00:13:58.982 "base_bdevs_list": [ 00:13:58.982 { 00:13:58.982 "name": null, 00:13:58.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.982 "is_configured": false, 00:13:58.982 "data_offset": 0, 00:13:58.982 "data_size": 63488 00:13:58.982 }, 00:13:58.982 { 00:13:58.982 "name": "BaseBdev2", 00:13:58.982 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:58.982 "is_configured": true, 00:13:58.982 "data_offset": 2048, 00:13:58.982 "data_size": 63488 00:13:58.982 } 00:13:58.982 ] 00:13:58.982 }' 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.982 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.242 09:26:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.242 "name": "raid_bdev1", 00:13:59.242 "uuid": "5aec40c6-b85e-46c9-83f9-b7edb7dbf8c1", 00:13:59.242 "strip_size_kb": 0, 00:13:59.242 "state": "online", 00:13:59.242 "raid_level": "raid1", 00:13:59.242 "superblock": true, 00:13:59.242 "num_base_bdevs": 2, 00:13:59.242 "num_base_bdevs_discovered": 1, 00:13:59.242 "num_base_bdevs_operational": 1, 00:13:59.242 "base_bdevs_list": [ 00:13:59.242 { 00:13:59.242 "name": null, 00:13:59.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.242 "is_configured": false, 00:13:59.242 "data_offset": 0, 00:13:59.242 "data_size": 63488 00:13:59.242 }, 00:13:59.242 { 00:13:59.242 "name": "BaseBdev2", 00:13:59.242 "uuid": "1329ba73-c80f-5dbd-8fe0-f41cf0104389", 00:13:59.242 "is_configured": true, 00:13:59.242 "data_offset": 2048, 00:13:59.242 "data_size": 63488 00:13:59.242 } 00:13:59.242 ] 00:13:59.242 }' 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.242 09:26:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77237 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77237 ']' 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77237 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.242 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77237 00:13:59.502 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.502 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.502 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77237' 00:13:59.502 killing process with pid 77237 00:13:59.502 Received shutdown signal, test time was about 17.236469 seconds 00:13:59.502 00:13:59.502 Latency(us) 00:13:59.502 [2024-11-20T09:26:24.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.502 [2024-11-20T09:26:24.958Z] =================================================================================================================== 00:13:59.502 [2024-11-20T09:26:24.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.502 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77237 00:13:59.502 [2024-11-20 09:26:24.705595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.502 [2024-11-20 09:26:24.705746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.502 09:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77237 00:13:59.502 [2024-11-20 09:26:24.705820] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.502 [2024-11-20 09:26:24.705837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:59.773 [2024-11-20 09:26:24.974473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.148 09:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:01.148 00:14:01.148 real 0m20.573s 00:14:01.148 user 0m26.952s 00:14:01.148 sys 0m2.283s 00:14:01.148 09:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.148 09:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.148 ************************************ 00:14:01.148 END TEST raid_rebuild_test_sb_io 00:14:01.148 ************************************ 00:14:01.148 09:26:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:01.148 09:26:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:01.148 09:26:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:01.149 09:26:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.149 09:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.149 ************************************ 00:14:01.149 START TEST raid_rebuild_test 00:14:01.149 ************************************ 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:01.149 09:26:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77926 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77926 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77926 ']' 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.149 09:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.149 [2024-11-20 09:26:26.398376] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:14:01.149 [2024-11-20 09:26:26.398613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77926 ] 00:14:01.149 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:01.149 Zero copy mechanism will not be used. 00:14:01.149 [2024-11-20 09:26:26.576127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.407 [2024-11-20 09:26:26.704738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.666 [2024-11-20 09:26:26.945703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.666 [2024-11-20 09:26:26.945831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.924 BaseBdev1_malloc 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:01.924 [2024-11-20 09:26:27.309993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:01.924 [2024-11-20 09:26:27.310090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.924 [2024-11-20 09:26:27.310120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:01.924 [2024-11-20 09:26:27.310134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.924 [2024-11-20 09:26:27.312673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.924 [2024-11-20 09:26:27.312767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.924 BaseBdev1 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.924 BaseBdev2_malloc 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.924 [2024-11-20 09:26:27.367665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:01.924 [2024-11-20 09:26:27.367796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:01.924 [2024-11-20 09:26:27.367823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:01.924 [2024-11-20 09:26:27.367837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.924 [2024-11-20 09:26:27.370187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.924 [2024-11-20 09:26:27.370232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:01.924 BaseBdev2 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.924 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.183 BaseBdev3_malloc 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.183 [2024-11-20 09:26:27.438105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:02.183 [2024-11-20 09:26:27.438179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.183 [2024-11-20 09:26:27.438204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.183 [2024-11-20 09:26:27.438215] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.183 [2024-11-20 09:26:27.440424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.183 [2024-11-20 09:26:27.440550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:02.183 BaseBdev3 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.183 BaseBdev4_malloc 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.183 [2024-11-20 09:26:27.492258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:02.183 [2024-11-20 09:26:27.492413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.183 [2024-11-20 09:26:27.492456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:02.183 [2024-11-20 09:26:27.492468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.183 [2024-11-20 09:26:27.494629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.183 [2024-11-20 09:26:27.494671] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:02.183 BaseBdev4 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.183 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.184 spare_malloc 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.184 spare_delay 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.184 [2024-11-20 09:26:27.558338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.184 [2024-11-20 09:26:27.558417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.184 [2024-11-20 09:26:27.558470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:02.184 [2024-11-20 09:26:27.558481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.184 [2024-11-20 
09:26:27.560582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.184 [2024-11-20 09:26:27.560622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.184 spare 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.184 [2024-11-20 09:26:27.570400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.184 [2024-11-20 09:26:27.572319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.184 [2024-11-20 09:26:27.572390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.184 [2024-11-20 09:26:27.572464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.184 [2024-11-20 09:26:27.572554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:02.184 [2024-11-20 09:26:27.572568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:02.184 [2024-11-20 09:26:27.572870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:02.184 [2024-11-20 09:26:27.573057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:02.184 [2024-11-20 09:26:27.573070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:02.184 [2024-11-20 09:26:27.573241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.184 "name": "raid_bdev1", 00:14:02.184 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:02.184 "strip_size_kb": 0, 00:14:02.184 "state": "online", 00:14:02.184 "raid_level": 
"raid1", 00:14:02.184 "superblock": false, 00:14:02.184 "num_base_bdevs": 4, 00:14:02.184 "num_base_bdevs_discovered": 4, 00:14:02.184 "num_base_bdevs_operational": 4, 00:14:02.184 "base_bdevs_list": [ 00:14:02.184 { 00:14:02.184 "name": "BaseBdev1", 00:14:02.184 "uuid": "48b3ced6-8328-5111-9520-81396db9abde", 00:14:02.184 "is_configured": true, 00:14:02.184 "data_offset": 0, 00:14:02.184 "data_size": 65536 00:14:02.184 }, 00:14:02.184 { 00:14:02.184 "name": "BaseBdev2", 00:14:02.184 "uuid": "15e37987-e786-5a2f-88e1-358d00bd5bfa", 00:14:02.184 "is_configured": true, 00:14:02.184 "data_offset": 0, 00:14:02.184 "data_size": 65536 00:14:02.184 }, 00:14:02.184 { 00:14:02.184 "name": "BaseBdev3", 00:14:02.184 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:02.184 "is_configured": true, 00:14:02.184 "data_offset": 0, 00:14:02.184 "data_size": 65536 00:14:02.184 }, 00:14:02.184 { 00:14:02.184 "name": "BaseBdev4", 00:14:02.184 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:02.184 "is_configured": true, 00:14:02.184 "data_offset": 0, 00:14:02.184 "data_size": 65536 00:14:02.184 } 00:14:02.184 ] 00:14:02.184 }' 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.184 09:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:02.754 [2024-11-20 09:26:28.049954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.754 09:26:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.754 09:26:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:03.013 [2024-11-20 09:26:28.321223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:03.013 /dev/nbd0 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.013 1+0 records in 00:14:03.013 1+0 records out 00:14:03.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390868 s, 10.5 MB/s 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:03.013 09:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:09.582 65536+0 records in 00:14:09.582 65536+0 records out 00:14:09.582 33554432 bytes (34 MB, 32 MiB) copied, 6.12709 s, 5.5 MB/s 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.582 [2024-11-20 09:26:34.742627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.582 
09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.582 [2024-11-20 09:26:34.784392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.582 09:26:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.582 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.582 "name": "raid_bdev1", 00:14:09.582 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:09.582 "strip_size_kb": 0, 00:14:09.582 "state": "online", 00:14:09.582 "raid_level": "raid1", 00:14:09.582 "superblock": false, 00:14:09.582 "num_base_bdevs": 4, 00:14:09.582 "num_base_bdevs_discovered": 3, 00:14:09.582 "num_base_bdevs_operational": 3, 00:14:09.582 "base_bdevs_list": [ 00:14:09.582 { 00:14:09.582 "name": null, 00:14:09.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.582 "is_configured": false, 00:14:09.582 "data_offset": 0, 00:14:09.582 "data_size": 65536 00:14:09.582 }, 00:14:09.582 { 00:14:09.582 "name": "BaseBdev2", 00:14:09.582 "uuid": "15e37987-e786-5a2f-88e1-358d00bd5bfa", 00:14:09.582 "is_configured": true, 00:14:09.582 "data_offset": 0, 00:14:09.582 "data_size": 65536 00:14:09.582 }, 00:14:09.582 { 00:14:09.582 "name": "BaseBdev3", 00:14:09.582 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:09.582 "is_configured": true, 00:14:09.583 "data_offset": 0, 00:14:09.583 "data_size": 65536 00:14:09.583 }, 00:14:09.583 { 00:14:09.583 "name": "BaseBdev4", 00:14:09.583 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:09.583 
"is_configured": true, 00:14:09.583 "data_offset": 0, 00:14:09.583 "data_size": 65536 00:14:09.583 } 00:14:09.583 ] 00:14:09.583 }' 00:14:09.583 09:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.583 09:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.843 09:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.843 09:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.843 09:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.843 [2024-11-20 09:26:35.239641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.843 [2024-11-20 09:26:35.258845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:09.843 09:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.844 09:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:09.844 [2024-11-20 09:26:35.261082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.223 "name": "raid_bdev1", 00:14:11.223 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:11.223 "strip_size_kb": 0, 00:14:11.223 "state": "online", 00:14:11.223 "raid_level": "raid1", 00:14:11.223 "superblock": false, 00:14:11.223 "num_base_bdevs": 4, 00:14:11.223 "num_base_bdevs_discovered": 4, 00:14:11.223 "num_base_bdevs_operational": 4, 00:14:11.223 "process": { 00:14:11.223 "type": "rebuild", 00:14:11.223 "target": "spare", 00:14:11.223 "progress": { 00:14:11.223 "blocks": 20480, 00:14:11.223 "percent": 31 00:14:11.223 } 00:14:11.223 }, 00:14:11.223 "base_bdevs_list": [ 00:14:11.223 { 00:14:11.223 "name": "spare", 00:14:11.223 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 }, 00:14:11.223 { 00:14:11.223 "name": "BaseBdev2", 00:14:11.223 "uuid": "15e37987-e786-5a2f-88e1-358d00bd5bfa", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 }, 00:14:11.223 { 00:14:11.223 "name": "BaseBdev3", 00:14:11.223 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 }, 00:14:11.223 { 00:14:11.223 "name": "BaseBdev4", 00:14:11.223 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 } 00:14:11.223 ] 00:14:11.223 }' 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.223 [2024-11-20 09:26:36.428046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.223 [2024-11-20 09:26:36.466969] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.223 [2024-11-20 09:26:36.467071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.223 [2024-11-20 09:26:36.467089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.223 [2024-11-20 09:26:36.467102] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.223 "name": "raid_bdev1", 00:14:11.223 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:11.223 "strip_size_kb": 0, 00:14:11.223 "state": "online", 00:14:11.223 "raid_level": "raid1", 00:14:11.223 "superblock": false, 00:14:11.223 "num_base_bdevs": 4, 00:14:11.223 "num_base_bdevs_discovered": 3, 00:14:11.223 "num_base_bdevs_operational": 3, 00:14:11.223 "base_bdevs_list": [ 00:14:11.223 { 00:14:11.223 "name": null, 00:14:11.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.223 "is_configured": false, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 }, 00:14:11.223 { 00:14:11.223 "name": "BaseBdev2", 00:14:11.223 "uuid": "15e37987-e786-5a2f-88e1-358d00bd5bfa", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 }, 00:14:11.223 { 
00:14:11.223 "name": "BaseBdev3", 00:14:11.223 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 }, 00:14:11.223 { 00:14:11.223 "name": "BaseBdev4", 00:14:11.223 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:11.223 "is_configured": true, 00:14:11.223 "data_offset": 0, 00:14:11.223 "data_size": 65536 00:14:11.223 } 00:14:11.223 ] 00:14:11.223 }' 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.223 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.792 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.792 "name": "raid_bdev1", 00:14:11.792 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:11.792 "strip_size_kb": 0, 00:14:11.792 "state": "online", 
00:14:11.792 "raid_level": "raid1", 00:14:11.792 "superblock": false, 00:14:11.792 "num_base_bdevs": 4, 00:14:11.792 "num_base_bdevs_discovered": 3, 00:14:11.792 "num_base_bdevs_operational": 3, 00:14:11.792 "base_bdevs_list": [ 00:14:11.792 { 00:14:11.792 "name": null, 00:14:11.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.792 "is_configured": false, 00:14:11.792 "data_offset": 0, 00:14:11.792 "data_size": 65536 00:14:11.792 }, 00:14:11.792 { 00:14:11.792 "name": "BaseBdev2", 00:14:11.792 "uuid": "15e37987-e786-5a2f-88e1-358d00bd5bfa", 00:14:11.792 "is_configured": true, 00:14:11.792 "data_offset": 0, 00:14:11.792 "data_size": 65536 00:14:11.792 }, 00:14:11.792 { 00:14:11.792 "name": "BaseBdev3", 00:14:11.792 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:11.792 "is_configured": true, 00:14:11.792 "data_offset": 0, 00:14:11.792 "data_size": 65536 00:14:11.793 }, 00:14:11.793 { 00:14:11.793 "name": "BaseBdev4", 00:14:11.793 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:11.793 "is_configured": true, 00:14:11.793 "data_offset": 0, 00:14:11.793 "data_size": 65536 00:14:11.793 } 00:14:11.793 ] 00:14:11.793 }' 00:14:11.793 09:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 [2024-11-20 09:26:37.101132] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.793 [2024-11-20 09:26:37.115609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.793 09:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:11.793 [2024-11-20 09:26:37.117625] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.730 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.730 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.731 "name": "raid_bdev1", 00:14:12.731 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:12.731 "strip_size_kb": 0, 00:14:12.731 "state": "online", 00:14:12.731 "raid_level": "raid1", 00:14:12.731 "superblock": false, 00:14:12.731 "num_base_bdevs": 4, 00:14:12.731 
"num_base_bdevs_discovered": 4, 00:14:12.731 "num_base_bdevs_operational": 4, 00:14:12.731 "process": { 00:14:12.731 "type": "rebuild", 00:14:12.731 "target": "spare", 00:14:12.731 "progress": { 00:14:12.731 "blocks": 20480, 00:14:12.731 "percent": 31 00:14:12.731 } 00:14:12.731 }, 00:14:12.731 "base_bdevs_list": [ 00:14:12.731 { 00:14:12.731 "name": "spare", 00:14:12.731 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:12.731 "is_configured": true, 00:14:12.731 "data_offset": 0, 00:14:12.731 "data_size": 65536 00:14:12.731 }, 00:14:12.731 { 00:14:12.731 "name": "BaseBdev2", 00:14:12.731 "uuid": "15e37987-e786-5a2f-88e1-358d00bd5bfa", 00:14:12.731 "is_configured": true, 00:14:12.731 "data_offset": 0, 00:14:12.731 "data_size": 65536 00:14:12.731 }, 00:14:12.731 { 00:14:12.731 "name": "BaseBdev3", 00:14:12.731 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:12.731 "is_configured": true, 00:14:12.731 "data_offset": 0, 00:14:12.731 "data_size": 65536 00:14:12.731 }, 00:14:12.731 { 00:14:12.731 "name": "BaseBdev4", 00:14:12.731 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:12.731 "is_configured": true, 00:14:12.731 "data_offset": 0, 00:14:12.731 "data_size": 65536 00:14:12.731 } 00:14:12.731 ] 00:14:12.731 }' 00:14:12.731 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 [2024-11-20 09:26:38.257014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.990 [2024-11-20 09:26:38.323352] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.990 09:26:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.990 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.990 "name": "raid_bdev1", 00:14:12.990 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:12.990 "strip_size_kb": 0, 00:14:12.990 "state": "online", 00:14:12.990 "raid_level": "raid1", 00:14:12.990 "superblock": false, 00:14:12.990 "num_base_bdevs": 4, 00:14:12.990 "num_base_bdevs_discovered": 3, 00:14:12.990 "num_base_bdevs_operational": 3, 00:14:12.990 "process": { 00:14:12.990 "type": "rebuild", 00:14:12.990 "target": "spare", 00:14:12.990 "progress": { 00:14:12.990 "blocks": 24576, 00:14:12.990 "percent": 37 00:14:12.990 } 00:14:12.990 }, 00:14:12.990 "base_bdevs_list": [ 00:14:12.990 { 00:14:12.990 "name": "spare", 00:14:12.990 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:12.990 "is_configured": true, 00:14:12.990 "data_offset": 0, 00:14:12.990 "data_size": 65536 00:14:12.990 }, 00:14:12.990 { 00:14:12.990 "name": null, 00:14:12.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.990 "is_configured": false, 00:14:12.990 "data_offset": 0, 00:14:12.991 "data_size": 65536 00:14:12.991 }, 00:14:12.991 { 00:14:12.991 "name": "BaseBdev3", 00:14:12.991 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:12.991 "is_configured": true, 00:14:12.991 "data_offset": 0, 00:14:12.991 "data_size": 65536 00:14:12.991 }, 00:14:12.991 { 00:14:12.991 "name": "BaseBdev4", 00:14:12.991 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:12.991 "is_configured": true, 00:14:12.991 "data_offset": 0, 00:14:12.991 "data_size": 65536 00:14:12.991 } 00:14:12.991 ] 00:14:12.991 }' 00:14:12.991 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.991 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.991 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=473 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.250 "name": "raid_bdev1", 00:14:13.250 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:13.250 "strip_size_kb": 0, 00:14:13.250 "state": "online", 00:14:13.250 "raid_level": "raid1", 00:14:13.250 "superblock": false, 00:14:13.250 "num_base_bdevs": 4, 00:14:13.250 "num_base_bdevs_discovered": 3, 00:14:13.250 "num_base_bdevs_operational": 3, 00:14:13.250 "process": { 00:14:13.250 "type": "rebuild", 00:14:13.250 "target": "spare", 00:14:13.250 "progress": { 
00:14:13.250 "blocks": 26624, 00:14:13.250 "percent": 40 00:14:13.250 } 00:14:13.250 }, 00:14:13.250 "base_bdevs_list": [ 00:14:13.250 { 00:14:13.250 "name": "spare", 00:14:13.250 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:13.250 "is_configured": true, 00:14:13.250 "data_offset": 0, 00:14:13.250 "data_size": 65536 00:14:13.250 }, 00:14:13.250 { 00:14:13.250 "name": null, 00:14:13.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.250 "is_configured": false, 00:14:13.250 "data_offset": 0, 00:14:13.250 "data_size": 65536 00:14:13.250 }, 00:14:13.250 { 00:14:13.250 "name": "BaseBdev3", 00:14:13.250 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:13.250 "is_configured": true, 00:14:13.250 "data_offset": 0, 00:14:13.250 "data_size": 65536 00:14:13.250 }, 00:14:13.250 { 00:14:13.250 "name": "BaseBdev4", 00:14:13.250 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:13.250 "is_configured": true, 00:14:13.250 "data_offset": 0, 00:14:13.250 "data_size": 65536 00:14:13.250 } 00:14:13.250 ] 00:14:13.250 }' 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.250 09:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.187 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.187 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.187 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.187 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:14.187 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.187 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.188 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.188 09:26:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.188 09:26:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.188 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.446 09:26:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.446 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.446 "name": "raid_bdev1", 00:14:14.446 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:14.446 "strip_size_kb": 0, 00:14:14.446 "state": "online", 00:14:14.446 "raid_level": "raid1", 00:14:14.446 "superblock": false, 00:14:14.446 "num_base_bdevs": 4, 00:14:14.446 "num_base_bdevs_discovered": 3, 00:14:14.446 "num_base_bdevs_operational": 3, 00:14:14.446 "process": { 00:14:14.446 "type": "rebuild", 00:14:14.446 "target": "spare", 00:14:14.446 "progress": { 00:14:14.446 "blocks": 49152, 00:14:14.446 "percent": 75 00:14:14.446 } 00:14:14.446 }, 00:14:14.446 "base_bdevs_list": [ 00:14:14.446 { 00:14:14.446 "name": "spare", 00:14:14.446 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:14.446 "is_configured": true, 00:14:14.446 "data_offset": 0, 00:14:14.446 "data_size": 65536 00:14:14.446 }, 00:14:14.446 { 00:14:14.446 "name": null, 00:14:14.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.446 "is_configured": false, 00:14:14.446 "data_offset": 0, 00:14:14.446 "data_size": 65536 00:14:14.446 }, 00:14:14.446 { 00:14:14.446 "name": "BaseBdev3", 00:14:14.446 "uuid": 
"e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:14.446 "is_configured": true, 00:14:14.446 "data_offset": 0, 00:14:14.446 "data_size": 65536 00:14:14.446 }, 00:14:14.446 { 00:14:14.446 "name": "BaseBdev4", 00:14:14.446 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:14.446 "is_configured": true, 00:14:14.446 "data_offset": 0, 00:14:14.446 "data_size": 65536 00:14:14.446 } 00:14:14.446 ] 00:14:14.446 }' 00:14:14.447 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.447 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.447 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.447 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.447 09:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.013 [2024-11-20 09:26:40.333094] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:15.013 [2024-11-20 09:26:40.333235] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:15.013 [2024-11-20 09:26:40.333307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.582 09:26:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.582 "name": "raid_bdev1", 00:14:15.582 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:15.582 "strip_size_kb": 0, 00:14:15.582 "state": "online", 00:14:15.582 "raid_level": "raid1", 00:14:15.582 "superblock": false, 00:14:15.582 "num_base_bdevs": 4, 00:14:15.582 "num_base_bdevs_discovered": 3, 00:14:15.582 "num_base_bdevs_operational": 3, 00:14:15.582 "base_bdevs_list": [ 00:14:15.582 { 00:14:15.582 "name": "spare", 00:14:15.582 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:15.582 "is_configured": true, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 }, 00:14:15.582 { 00:14:15.582 "name": null, 00:14:15.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.582 "is_configured": false, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 }, 00:14:15.582 { 00:14:15.582 "name": "BaseBdev3", 00:14:15.582 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:15.582 "is_configured": true, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 }, 00:14:15.582 { 00:14:15.582 "name": "BaseBdev4", 00:14:15.582 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:15.582 "is_configured": true, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 } 00:14:15.582 ] 00:14:15.582 }' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.582 "name": "raid_bdev1", 00:14:15.582 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:15.582 "strip_size_kb": 0, 00:14:15.582 "state": "online", 00:14:15.582 "raid_level": "raid1", 00:14:15.582 "superblock": false, 00:14:15.582 "num_base_bdevs": 4, 00:14:15.582 "num_base_bdevs_discovered": 3, 00:14:15.582 "num_base_bdevs_operational": 3, 00:14:15.582 
"base_bdevs_list": [ 00:14:15.582 { 00:14:15.582 "name": "spare", 00:14:15.582 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:15.582 "is_configured": true, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 }, 00:14:15.582 { 00:14:15.582 "name": null, 00:14:15.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.582 "is_configured": false, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 }, 00:14:15.582 { 00:14:15.582 "name": "BaseBdev3", 00:14:15.582 "uuid": "e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:15.582 "is_configured": true, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 }, 00:14:15.582 { 00:14:15.582 "name": "BaseBdev4", 00:14:15.582 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:15.582 "is_configured": true, 00:14:15.582 "data_offset": 0, 00:14:15.582 "data_size": 65536 00:14:15.582 } 00:14:15.582 ] 00:14:15.582 }' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.582 09:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.582 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.842 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.842 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.842 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.843 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.843 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.843 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.843 "name": "raid_bdev1", 00:14:15.843 "uuid": "ac1ef725-8800-4f71-9f6c-5528555cf2e1", 00:14:15.843 "strip_size_kb": 0, 00:14:15.843 "state": "online", 00:14:15.843 "raid_level": "raid1", 00:14:15.843 "superblock": false, 00:14:15.843 "num_base_bdevs": 4, 00:14:15.843 "num_base_bdevs_discovered": 3, 00:14:15.843 "num_base_bdevs_operational": 3, 00:14:15.843 "base_bdevs_list": [ 00:14:15.843 { 00:14:15.843 "name": "spare", 00:14:15.843 "uuid": "eca9e8f1-caa0-56b1-ae79-9c4966031308", 00:14:15.843 "is_configured": true, 00:14:15.843 "data_offset": 0, 00:14:15.843 "data_size": 65536 00:14:15.843 }, 00:14:15.843 { 00:14:15.843 "name": null, 00:14:15.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.843 "is_configured": false, 00:14:15.843 "data_offset": 0, 00:14:15.843 "data_size": 65536 00:14:15.843 }, 00:14:15.843 { 00:14:15.843 "name": "BaseBdev3", 00:14:15.843 "uuid": 
"e7f8c808-3179-5b8e-98e4-2b5778d18ca1", 00:14:15.843 "is_configured": true, 00:14:15.843 "data_offset": 0, 00:14:15.843 "data_size": 65536 00:14:15.843 }, 00:14:15.843 { 00:14:15.843 "name": "BaseBdev4", 00:14:15.843 "uuid": "2cf74e31-15c9-5226-a704-28e5f18fa258", 00:14:15.843 "is_configured": true, 00:14:15.843 "data_offset": 0, 00:14:15.843 "data_size": 65536 00:14:15.843 } 00:14:15.843 ] 00:14:15.843 }' 00:14:15.843 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.843 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.102 [2024-11-20 09:26:41.471572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.102 [2024-11-20 09:26:41.471656] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.102 [2024-11-20 09:26:41.471775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.102 [2024-11-20 09:26:41.471905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.102 [2024-11-20 09:26:41.471959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.102 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:16.362 /dev/nbd0 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:16.362 09:26:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.362 1+0 records in 00:14:16.362 1+0 records out 00:14:16.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328955 s, 12.5 MB/s 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.362 09:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:16.622 /dev/nbd1 00:14:16.622 
09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.622 1+0 records in 00:14:16.622 1+0 records out 00:14:16.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004221 s, 9.7 MB/s 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.622 09:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.882 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.141 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77926 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77926 ']' 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77926 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77926 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.401 killing process with pid 77926 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77926' 00:14:17.401 
09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77926 00:14:17.401 Received shutdown signal, test time was about 60.000000 seconds 00:14:17.401 00:14:17.401 Latency(us) 00:14:17.401 [2024-11-20T09:26:42.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.401 [2024-11-20T09:26:42.857Z] =================================================================================================================== 00:14:17.401 [2024-11-20T09:26:42.857Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.401 [2024-11-20 09:26:42.798305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.401 09:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77926 00:14:17.971 [2024-11-20 09:26:43.331605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:19.352 00:14:19.352 real 0m18.254s 00:14:19.352 user 0m20.460s 00:14:19.352 sys 0m3.296s 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.352 ************************************ 00:14:19.352 END TEST raid_rebuild_test 00:14:19.352 ************************************ 00:14:19.352 09:26:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:19.352 09:26:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:19.352 09:26:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.352 09:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.352 ************************************ 00:14:19.352 START TEST raid_rebuild_test_sb 00:14:19.352 ************************************ 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.352 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78378 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78378 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78378 ']' 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.353 09:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.353 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.353 Zero copy mechanism will not be used. 00:14:19.353 [2024-11-20 09:26:44.720791] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:14:19.353 [2024-11-20 09:26:44.720941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78378 ] 00:14:19.612 [2024-11-20 09:26:44.893092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.612 [2024-11-20 09:26:45.014183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.872 [2024-11-20 09:26:45.223845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.872 [2024-11-20 09:26:45.223893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.441 BaseBdev1_malloc 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.441 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.441 [2024-11-20 09:26:45.636647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.441 [2024-11-20 09:26:45.636722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.441 [2024-11-20 09:26:45.636744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:20.441 [2024-11-20 09:26:45.636756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.441 [2024-11-20 09:26:45.639029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.441 [2024-11-20 09:26:45.639074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.442 BaseBdev1 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 BaseBdev2_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 [2024-11-20 09:26:45.694038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:20.442 [2024-11-20 09:26:45.694106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.442 [2024-11-20 09:26:45.694126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:20.442 [2024-11-20 09:26:45.694141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.442 [2024-11-20 09:26:45.696544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.442 [2024-11-20 09:26:45.696588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:20.442 BaseBdev2 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 BaseBdev3_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 [2024-11-20 09:26:45.762380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:20.442 [2024-11-20 09:26:45.762467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.442 [2024-11-20 09:26:45.762491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:20.442 [2024-11-20 09:26:45.762504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.442 [2024-11-20 09:26:45.764774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.442 [2024-11-20 09:26:45.764815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:20.442 BaseBdev3 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 BaseBdev4_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:20.442 [2024-11-20 09:26:45.818041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:20.442 [2024-11-20 09:26:45.818118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.442 [2024-11-20 09:26:45.818142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:20.442 [2024-11-20 09:26:45.818154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.442 [2024-11-20 09:26:45.820491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.442 [2024-11-20 09:26:45.820533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:20.442 BaseBdev4 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 spare_malloc 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 spare_delay 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.442 09:26:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.442 [2024-11-20 09:26:45.884788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.442 [2024-11-20 09:26:45.884850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.442 [2024-11-20 09:26:45.884872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:20.442 [2024-11-20 09:26:45.884883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.442 [2024-11-20 09:26:45.887068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.442 [2024-11-20 09:26:45.887106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.442 spare 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.442 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.702 [2024-11-20 09:26:45.896820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.702 [2024-11-20 09:26:45.898753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.702 [2024-11-20 09:26:45.898825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.702 [2024-11-20 09:26:45.898878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.702 [2024-11-20 09:26:45.899064] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:20.702 [2024-11-20 09:26:45.899091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.702 [2024-11-20 09:26:45.899374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:20.702 [2024-11-20 09:26:45.899619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:20.702 [2024-11-20 09:26:45.899646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:20.702 [2024-11-20 09:26:45.899822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.702 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.703 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.703 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.703 09:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.703 "name": "raid_bdev1", 00:14:20.703 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:20.703 "strip_size_kb": 0, 00:14:20.703 "state": "online", 00:14:20.703 "raid_level": "raid1", 00:14:20.703 "superblock": true, 00:14:20.703 "num_base_bdevs": 4, 00:14:20.703 "num_base_bdevs_discovered": 4, 00:14:20.703 "num_base_bdevs_operational": 4, 00:14:20.703 "base_bdevs_list": [ 00:14:20.703 { 00:14:20.703 "name": "BaseBdev1", 00:14:20.703 "uuid": "c0a1b8ca-0c4c-5636-8b98-6d8d0fddc694", 00:14:20.703 "is_configured": true, 00:14:20.703 "data_offset": 2048, 00:14:20.703 "data_size": 63488 00:14:20.703 }, 00:14:20.703 { 00:14:20.703 "name": "BaseBdev2", 00:14:20.703 "uuid": "23de5546-1dc6-5efb-a41d-88eb27514d9a", 00:14:20.703 "is_configured": true, 00:14:20.703 "data_offset": 2048, 00:14:20.703 "data_size": 63488 00:14:20.703 }, 00:14:20.703 { 00:14:20.703 "name": "BaseBdev3", 00:14:20.703 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:20.703 "is_configured": true, 00:14:20.703 "data_offset": 2048, 00:14:20.703 "data_size": 63488 00:14:20.703 }, 00:14:20.703 { 00:14:20.703 "name": "BaseBdev4", 00:14:20.703 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:20.703 "is_configured": true, 00:14:20.703 "data_offset": 2048, 00:14:20.703 "data_size": 63488 00:14:20.703 } 00:14:20.703 ] 00:14:20.703 }' 00:14:20.703 09:26:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.703 09:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.962 [2024-11-20 09:26:46.340481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.962 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.963 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:21.223 [2024-11-20 09:26:46.627685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:21.223 /dev/nbd0 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.223 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:21.483 
09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.483 1+0 records in 00:14:21.483 1+0 records out 00:14:21.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406997 s, 10.1 MB/s 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:21.483 09:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:28.132 63488+0 records in 00:14:28.132 63488+0 records out 00:14:28.132 32505856 bytes (33 MB, 31 MiB) copied, 5.86432 s, 5.5 MB/s 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.132 [2024-11-20 09:26:52.794052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.132 [2024-11-20 09:26:52.834614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.132 
09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.132 "name": "raid_bdev1", 00:14:28.132 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:28.132 "strip_size_kb": 0, 00:14:28.132 "state": 
"online", 00:14:28.132 "raid_level": "raid1", 00:14:28.132 "superblock": true, 00:14:28.132 "num_base_bdevs": 4, 00:14:28.132 "num_base_bdevs_discovered": 3, 00:14:28.132 "num_base_bdevs_operational": 3, 00:14:28.132 "base_bdevs_list": [ 00:14:28.132 { 00:14:28.132 "name": null, 00:14:28.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.132 "is_configured": false, 00:14:28.132 "data_offset": 0, 00:14:28.132 "data_size": 63488 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "name": "BaseBdev2", 00:14:28.132 "uuid": "23de5546-1dc6-5efb-a41d-88eb27514d9a", 00:14:28.132 "is_configured": true, 00:14:28.132 "data_offset": 2048, 00:14:28.132 "data_size": 63488 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "name": "BaseBdev3", 00:14:28.132 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:28.132 "is_configured": true, 00:14:28.132 "data_offset": 2048, 00:14:28.132 "data_size": 63488 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "name": "BaseBdev4", 00:14:28.132 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:28.132 "is_configured": true, 00:14:28.132 "data_offset": 2048, 00:14:28.132 "data_size": 63488 00:14:28.132 } 00:14:28.132 ] 00:14:28.132 }' 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.132 09:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.132 09:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.132 09:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.132 09:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.132 [2024-11-20 09:26:53.313826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.132 [2024-11-20 09:26:53.329910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:28.132 09:26:53 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.132 09:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:28.132 [2024-11-20 09:26:53.332059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.072 "name": "raid_bdev1", 00:14:29.072 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:29.072 "strip_size_kb": 0, 00:14:29.072 "state": "online", 00:14:29.072 "raid_level": "raid1", 00:14:29.072 "superblock": true, 00:14:29.072 "num_base_bdevs": 4, 00:14:29.072 "num_base_bdevs_discovered": 4, 00:14:29.072 "num_base_bdevs_operational": 4, 00:14:29.072 "process": { 00:14:29.072 "type": "rebuild", 00:14:29.072 "target": "spare", 00:14:29.072 "progress": { 00:14:29.072 "blocks": 20480, 
00:14:29.072 "percent": 32 00:14:29.072 } 00:14:29.072 }, 00:14:29.072 "base_bdevs_list": [ 00:14:29.072 { 00:14:29.072 "name": "spare", 00:14:29.072 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:29.072 "is_configured": true, 00:14:29.072 "data_offset": 2048, 00:14:29.072 "data_size": 63488 00:14:29.072 }, 00:14:29.072 { 00:14:29.072 "name": "BaseBdev2", 00:14:29.072 "uuid": "23de5546-1dc6-5efb-a41d-88eb27514d9a", 00:14:29.072 "is_configured": true, 00:14:29.072 "data_offset": 2048, 00:14:29.072 "data_size": 63488 00:14:29.072 }, 00:14:29.072 { 00:14:29.072 "name": "BaseBdev3", 00:14:29.072 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:29.072 "is_configured": true, 00:14:29.072 "data_offset": 2048, 00:14:29.072 "data_size": 63488 00:14:29.072 }, 00:14:29.072 { 00:14:29.072 "name": "BaseBdev4", 00:14:29.072 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:29.072 "is_configured": true, 00:14:29.072 "data_offset": 2048, 00:14:29.072 "data_size": 63488 00:14:29.072 } 00:14:29.072 ] 00:14:29.072 }' 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.072 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.072 [2024-11-20 09:26:54.483603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.332 [2024-11-20 09:26:54.537996] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.332 [2024-11-20 09:26:54.538096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.332 [2024-11-20 09:26:54.538114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.332 [2024-11-20 09:26:54.538125] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.332 "name": "raid_bdev1", 00:14:29.332 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:29.332 "strip_size_kb": 0, 00:14:29.332 "state": "online", 00:14:29.332 "raid_level": "raid1", 00:14:29.332 "superblock": true, 00:14:29.332 "num_base_bdevs": 4, 00:14:29.332 "num_base_bdevs_discovered": 3, 00:14:29.332 "num_base_bdevs_operational": 3, 00:14:29.332 "base_bdevs_list": [ 00:14:29.332 { 00:14:29.332 "name": null, 00:14:29.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.332 "is_configured": false, 00:14:29.332 "data_offset": 0, 00:14:29.332 "data_size": 63488 00:14:29.332 }, 00:14:29.332 { 00:14:29.332 "name": "BaseBdev2", 00:14:29.332 "uuid": "23de5546-1dc6-5efb-a41d-88eb27514d9a", 00:14:29.332 "is_configured": true, 00:14:29.332 "data_offset": 2048, 00:14:29.332 "data_size": 63488 00:14:29.332 }, 00:14:29.332 { 00:14:29.332 "name": "BaseBdev3", 00:14:29.332 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:29.332 "is_configured": true, 00:14:29.332 "data_offset": 2048, 00:14:29.332 "data_size": 63488 00:14:29.332 }, 00:14:29.332 { 00:14:29.332 "name": "BaseBdev4", 00:14:29.332 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:29.332 "is_configured": true, 00:14:29.332 "data_offset": 2048, 00:14:29.332 "data_size": 63488 00:14:29.332 } 00:14:29.332 ] 00:14:29.332 }' 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.332 09:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.593 
09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.593 09:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.854 "name": "raid_bdev1", 00:14:29.854 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:29.854 "strip_size_kb": 0, 00:14:29.854 "state": "online", 00:14:29.854 "raid_level": "raid1", 00:14:29.854 "superblock": true, 00:14:29.854 "num_base_bdevs": 4, 00:14:29.854 "num_base_bdevs_discovered": 3, 00:14:29.854 "num_base_bdevs_operational": 3, 00:14:29.854 "base_bdevs_list": [ 00:14:29.854 { 00:14:29.854 "name": null, 00:14:29.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.854 "is_configured": false, 00:14:29.854 "data_offset": 0, 00:14:29.854 "data_size": 63488 00:14:29.854 }, 00:14:29.854 { 00:14:29.854 "name": "BaseBdev2", 00:14:29.854 "uuid": "23de5546-1dc6-5efb-a41d-88eb27514d9a", 00:14:29.854 "is_configured": true, 00:14:29.854 "data_offset": 2048, 00:14:29.854 "data_size": 63488 00:14:29.854 }, 00:14:29.854 { 00:14:29.854 "name": "BaseBdev3", 00:14:29.854 "uuid": 
"4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:29.854 "is_configured": true, 00:14:29.854 "data_offset": 2048, 00:14:29.854 "data_size": 63488 00:14:29.854 }, 00:14:29.854 { 00:14:29.854 "name": "BaseBdev4", 00:14:29.854 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:29.854 "is_configured": true, 00:14:29.854 "data_offset": 2048, 00:14:29.854 "data_size": 63488 00:14:29.854 } 00:14:29.854 ] 00:14:29.854 }' 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.854 [2024-11-20 09:26:55.170299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.854 [2024-11-20 09:26:55.186010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.854 09:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:29.854 [2024-11-20 09:26:55.188161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.790 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.048 "name": "raid_bdev1", 00:14:31.048 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:31.048 "strip_size_kb": 0, 00:14:31.048 "state": "online", 00:14:31.048 "raid_level": "raid1", 00:14:31.048 "superblock": true, 00:14:31.048 "num_base_bdevs": 4, 00:14:31.048 "num_base_bdevs_discovered": 4, 00:14:31.048 "num_base_bdevs_operational": 4, 00:14:31.048 "process": { 00:14:31.048 "type": "rebuild", 00:14:31.048 "target": "spare", 00:14:31.048 "progress": { 00:14:31.048 "blocks": 20480, 00:14:31.048 "percent": 32 00:14:31.048 } 00:14:31.048 }, 00:14:31.048 "base_bdevs_list": [ 00:14:31.048 { 00:14:31.048 "name": "spare", 00:14:31.048 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:31.048 "is_configured": true, 00:14:31.048 "data_offset": 2048, 00:14:31.048 "data_size": 63488 00:14:31.048 }, 00:14:31.048 { 00:14:31.048 "name": "BaseBdev2", 00:14:31.048 "uuid": "23de5546-1dc6-5efb-a41d-88eb27514d9a", 00:14:31.048 "is_configured": true, 00:14:31.048 "data_offset": 2048, 
00:14:31.048 "data_size": 63488 00:14:31.048 }, 00:14:31.048 { 00:14:31.048 "name": "BaseBdev3", 00:14:31.048 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:31.048 "is_configured": true, 00:14:31.048 "data_offset": 2048, 00:14:31.048 "data_size": 63488 00:14:31.048 }, 00:14:31.048 { 00:14:31.048 "name": "BaseBdev4", 00:14:31.048 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:31.048 "is_configured": true, 00:14:31.048 "data_offset": 2048, 00:14:31.048 "data_size": 63488 00:14:31.048 } 00:14:31.048 ] 00:14:31.048 }' 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:31.048 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:31.049 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.049 [2024-11-20 09:26:56.332098] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.049 [2024-11-20 09:26:56.497951] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.049 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.307 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.308 "name": "raid_bdev1", 00:14:31.308 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:31.308 "strip_size_kb": 0, 00:14:31.308 "state": "online", 00:14:31.308 "raid_level": "raid1", 00:14:31.308 "superblock": true, 00:14:31.308 "num_base_bdevs": 4, 
00:14:31.308 "num_base_bdevs_discovered": 3, 00:14:31.308 "num_base_bdevs_operational": 3, 00:14:31.308 "process": { 00:14:31.308 "type": "rebuild", 00:14:31.308 "target": "spare", 00:14:31.308 "progress": { 00:14:31.308 "blocks": 24576, 00:14:31.308 "percent": 38 00:14:31.308 } 00:14:31.308 }, 00:14:31.308 "base_bdevs_list": [ 00:14:31.308 { 00:14:31.308 "name": "spare", 00:14:31.308 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:31.308 "is_configured": true, 00:14:31.308 "data_offset": 2048, 00:14:31.308 "data_size": 63488 00:14:31.308 }, 00:14:31.308 { 00:14:31.308 "name": null, 00:14:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.308 "is_configured": false, 00:14:31.308 "data_offset": 0, 00:14:31.308 "data_size": 63488 00:14:31.308 }, 00:14:31.308 { 00:14:31.308 "name": "BaseBdev3", 00:14:31.308 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:31.308 "is_configured": true, 00:14:31.308 "data_offset": 2048, 00:14:31.308 "data_size": 63488 00:14:31.308 }, 00:14:31.308 { 00:14:31.308 "name": "BaseBdev4", 00:14:31.308 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:31.308 "is_configured": true, 00:14:31.308 "data_offset": 2048, 00:14:31.308 "data_size": 63488 00:14:31.308 } 00:14:31.308 ] 00:14:31.308 }' 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=491 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.308 "name": "raid_bdev1", 00:14:31.308 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:31.308 "strip_size_kb": 0, 00:14:31.308 "state": "online", 00:14:31.308 "raid_level": "raid1", 00:14:31.308 "superblock": true, 00:14:31.308 "num_base_bdevs": 4, 00:14:31.308 "num_base_bdevs_discovered": 3, 00:14:31.308 "num_base_bdevs_operational": 3, 00:14:31.308 "process": { 00:14:31.308 "type": "rebuild", 00:14:31.308 "target": "spare", 00:14:31.308 "progress": { 00:14:31.308 "blocks": 26624, 00:14:31.308 "percent": 41 00:14:31.308 } 00:14:31.308 }, 00:14:31.308 "base_bdevs_list": [ 00:14:31.308 { 00:14:31.308 "name": "spare", 00:14:31.308 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:31.308 "is_configured": true, 00:14:31.308 "data_offset": 2048, 00:14:31.308 "data_size": 63488 00:14:31.308 }, 00:14:31.308 { 
00:14:31.308 "name": null, 00:14:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.308 "is_configured": false, 00:14:31.308 "data_offset": 0, 00:14:31.308 "data_size": 63488 00:14:31.308 }, 00:14:31.308 { 00:14:31.308 "name": "BaseBdev3", 00:14:31.308 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:31.308 "is_configured": true, 00:14:31.308 "data_offset": 2048, 00:14:31.308 "data_size": 63488 00:14:31.308 }, 00:14:31.308 { 00:14:31.308 "name": "BaseBdev4", 00:14:31.308 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:31.308 "is_configured": true, 00:14:31.308 "data_offset": 2048, 00:14:31.308 "data_size": 63488 00:14:31.308 } 00:14:31.308 ] 00:14:31.308 }' 00:14:31.308 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.568 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.568 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.568 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.568 09:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.505 "name": "raid_bdev1", 00:14:32.505 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:32.505 "strip_size_kb": 0, 00:14:32.505 "state": "online", 00:14:32.505 "raid_level": "raid1", 00:14:32.505 "superblock": true, 00:14:32.505 "num_base_bdevs": 4, 00:14:32.505 "num_base_bdevs_discovered": 3, 00:14:32.505 "num_base_bdevs_operational": 3, 00:14:32.505 "process": { 00:14:32.505 "type": "rebuild", 00:14:32.505 "target": "spare", 00:14:32.505 "progress": { 00:14:32.505 "blocks": 51200, 00:14:32.505 "percent": 80 00:14:32.505 } 00:14:32.505 }, 00:14:32.505 "base_bdevs_list": [ 00:14:32.505 { 00:14:32.505 "name": "spare", 00:14:32.505 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:32.505 "is_configured": true, 00:14:32.505 "data_offset": 2048, 00:14:32.505 "data_size": 63488 00:14:32.505 }, 00:14:32.505 { 00:14:32.505 "name": null, 00:14:32.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.505 "is_configured": false, 00:14:32.505 "data_offset": 0, 00:14:32.505 "data_size": 63488 00:14:32.505 }, 00:14:32.505 { 00:14:32.505 "name": "BaseBdev3", 00:14:32.505 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:32.505 "is_configured": true, 00:14:32.505 "data_offset": 2048, 00:14:32.505 "data_size": 63488 00:14:32.505 }, 00:14:32.505 { 00:14:32.505 "name": "BaseBdev4", 00:14:32.505 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:32.505 "is_configured": true, 00:14:32.505 "data_offset": 
2048, 00:14:32.505 "data_size": 63488 00:14:32.505 } 00:14:32.505 ] 00:14:32.505 }' 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.505 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.764 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.764 09:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.022 [2024-11-20 09:26:58.415093] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:33.022 [2024-11-20 09:26:58.415341] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:33.022 [2024-11-20 09:26:58.415569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.589 09:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.589 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.589 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.589 "name": "raid_bdev1", 00:14:33.589 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:33.589 "strip_size_kb": 0, 00:14:33.589 "state": "online", 00:14:33.589 "raid_level": "raid1", 00:14:33.589 "superblock": true, 00:14:33.589 "num_base_bdevs": 4, 00:14:33.589 "num_base_bdevs_discovered": 3, 00:14:33.589 "num_base_bdevs_operational": 3, 00:14:33.589 "base_bdevs_list": [ 00:14:33.589 { 00:14:33.589 "name": "spare", 00:14:33.589 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": null, 00:14:33.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.589 "is_configured": false, 00:14:33.589 "data_offset": 0, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": "BaseBdev3", 00:14:33.589 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": "BaseBdev4", 00:14:33.589 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 } 00:14:33.589 ] 00:14:33.589 }' 00:14:33.589 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.894 "name": "raid_bdev1", 00:14:33.894 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:33.894 "strip_size_kb": 0, 00:14:33.894 "state": "online", 00:14:33.894 "raid_level": "raid1", 00:14:33.894 "superblock": true, 00:14:33.894 "num_base_bdevs": 4, 00:14:33.894 "num_base_bdevs_discovered": 3, 00:14:33.894 "num_base_bdevs_operational": 3, 00:14:33.894 "base_bdevs_list": [ 00:14:33.894 { 00:14:33.894 "name": "spare", 00:14:33.894 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:33.894 "is_configured": true, 00:14:33.894 "data_offset": 2048, 
00:14:33.894 "data_size": 63488 00:14:33.894 }, 00:14:33.894 { 00:14:33.894 "name": null, 00:14:33.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.894 "is_configured": false, 00:14:33.894 "data_offset": 0, 00:14:33.894 "data_size": 63488 00:14:33.894 }, 00:14:33.894 { 00:14:33.894 "name": "BaseBdev3", 00:14:33.894 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:33.894 "is_configured": true, 00:14:33.894 "data_offset": 2048, 00:14:33.894 "data_size": 63488 00:14:33.894 }, 00:14:33.894 { 00:14:33.894 "name": "BaseBdev4", 00:14:33.894 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:33.894 "is_configured": true, 00:14:33.894 "data_offset": 2048, 00:14:33.894 "data_size": 63488 00:14:33.894 } 00:14:33.894 ] 00:14:33.894 }' 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.894 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.895 
09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.895 "name": "raid_bdev1", 00:14:33.895 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:33.895 "strip_size_kb": 0, 00:14:33.895 "state": "online", 00:14:33.895 "raid_level": "raid1", 00:14:33.895 "superblock": true, 00:14:33.895 "num_base_bdevs": 4, 00:14:33.895 "num_base_bdevs_discovered": 3, 00:14:33.895 "num_base_bdevs_operational": 3, 00:14:33.895 "base_bdevs_list": [ 00:14:33.895 { 00:14:33.895 "name": "spare", 00:14:33.895 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:33.895 "is_configured": true, 00:14:33.895 "data_offset": 2048, 00:14:33.895 "data_size": 63488 00:14:33.895 }, 00:14:33.895 { 00:14:33.895 "name": null, 00:14:33.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.895 "is_configured": false, 00:14:33.895 "data_offset": 0, 00:14:33.895 "data_size": 63488 00:14:33.895 }, 00:14:33.895 { 00:14:33.895 "name": "BaseBdev3", 00:14:33.895 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:33.895 "is_configured": true, 00:14:33.895 "data_offset": 2048, 00:14:33.895 "data_size": 63488 
00:14:33.895 }, 00:14:33.895 { 00:14:33.895 "name": "BaseBdev4", 00:14:33.895 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:33.895 "is_configured": true, 00:14:33.895 "data_offset": 2048, 00:14:33.895 "data_size": 63488 00:14:33.895 } 00:14:33.895 ] 00:14:33.895 }' 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.895 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.461 [2024-11-20 09:26:59.743707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.461 [2024-11-20 09:26:59.743748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.461 [2024-11-20 09:26:59.743861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.461 [2024-11-20 09:26:59.743966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.461 [2024-11-20 09:26:59.743979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:34.461 
09:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:34.461 09:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:34.720 /dev/nbd0 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.720 1+0 records in 00:14:34.720 1+0 records out 00:14:34.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334688 s, 12.2 MB/s 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:34.720 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:34.979 /dev/nbd1 00:14:34.979 09:27:00 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.979 1+0 records in 00:14:34.979 1+0 records out 00:14:34.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583109 s, 7.0 MB/s 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:34.979 09:27:00 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:34.979 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.238 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:35.495 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:35.495 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:35.495 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.496 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:35.756 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:35.756 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:35.756 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:35.756 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.756 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.756 09:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.756 [2024-11-20 09:27:01.021328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:35.756 [2024-11-20 09:27:01.021394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.756 [2024-11-20 09:27:01.021418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:35.756 [2024-11-20 09:27:01.021451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.756 [2024-11-20 09:27:01.023949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.756 [2024-11-20 09:27:01.024078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.756 [2024-11-20 09:27:01.024222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:35.756 [2024-11-20 09:27:01.024285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.756 [2024-11-20 09:27:01.024505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.756 [2024-11-20 09:27:01.024628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:35.756 spare 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.756 [2024-11-20 09:27:01.124546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:35.756 [2024-11-20 09:27:01.124589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:35.756 [2024-11-20 09:27:01.124956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:35.756 [2024-11-20 09:27:01.125157] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:35.756 [2024-11-20 09:27:01.125170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:35.756 [2024-11-20 09:27:01.125355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.756 "name": "raid_bdev1", 00:14:35.756 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:35.756 "strip_size_kb": 0, 00:14:35.756 "state": "online", 00:14:35.756 "raid_level": "raid1", 00:14:35.756 "superblock": true, 00:14:35.756 "num_base_bdevs": 4, 00:14:35.756 "num_base_bdevs_discovered": 3, 00:14:35.756 "num_base_bdevs_operational": 3, 00:14:35.756 "base_bdevs_list": [ 00:14:35.756 { 00:14:35.756 "name": "spare", 00:14:35.756 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:35.756 "is_configured": true, 00:14:35.756 "data_offset": 2048, 00:14:35.756 "data_size": 63488 00:14:35.756 }, 00:14:35.756 { 00:14:35.756 "name": null, 00:14:35.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.756 "is_configured": false, 00:14:35.756 "data_offset": 2048, 00:14:35.756 "data_size": 63488 00:14:35.756 }, 00:14:35.756 { 00:14:35.756 "name": "BaseBdev3", 00:14:35.756 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:35.756 "is_configured": true, 00:14:35.756 "data_offset": 2048, 00:14:35.756 "data_size": 63488 00:14:35.756 }, 00:14:35.756 { 00:14:35.756 "name": "BaseBdev4", 00:14:35.756 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:35.756 "is_configured": true, 00:14:35.756 "data_offset": 2048, 00:14:35.756 "data_size": 63488 00:14:35.756 } 00:14:35.756 ] 00:14:35.756 }' 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.756 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.325 
09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.325 "name": "raid_bdev1", 00:14:36.325 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:36.325 "strip_size_kb": 0, 00:14:36.325 "state": "online", 00:14:36.325 "raid_level": "raid1", 00:14:36.325 "superblock": true, 00:14:36.325 "num_base_bdevs": 4, 00:14:36.325 "num_base_bdevs_discovered": 3, 00:14:36.325 "num_base_bdevs_operational": 3, 00:14:36.325 "base_bdevs_list": [ 00:14:36.325 { 00:14:36.325 "name": "spare", 00:14:36.325 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:36.325 "is_configured": true, 00:14:36.325 "data_offset": 2048, 00:14:36.325 "data_size": 63488 00:14:36.325 }, 00:14:36.325 { 00:14:36.325 "name": null, 00:14:36.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.325 "is_configured": false, 00:14:36.325 "data_offset": 2048, 00:14:36.325 "data_size": 63488 00:14:36.325 }, 00:14:36.325 { 00:14:36.325 "name": "BaseBdev3", 00:14:36.325 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:36.325 "is_configured": true, 00:14:36.325 "data_offset": 2048, 00:14:36.325 "data_size": 63488 
00:14:36.325 }, 00:14:36.325 { 00:14:36.325 "name": "BaseBdev4", 00:14:36.325 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:36.325 "is_configured": true, 00:14:36.325 "data_offset": 2048, 00:14:36.325 "data_size": 63488 00:14:36.325 } 00:14:36.325 ] 00:14:36.325 }' 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.325 [2024-11-20 09:27:01.748340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.325 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.584 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.584 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.584 "name": "raid_bdev1", 00:14:36.584 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:36.584 "strip_size_kb": 0, 00:14:36.584 "state": "online", 00:14:36.584 "raid_level": "raid1", 00:14:36.584 "superblock": true, 00:14:36.584 "num_base_bdevs": 4, 00:14:36.584 "num_base_bdevs_discovered": 2, 00:14:36.584 
"num_base_bdevs_operational": 2, 00:14:36.584 "base_bdevs_list": [ 00:14:36.584 { 00:14:36.584 "name": null, 00:14:36.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.584 "is_configured": false, 00:14:36.584 "data_offset": 0, 00:14:36.584 "data_size": 63488 00:14:36.584 }, 00:14:36.584 { 00:14:36.584 "name": null, 00:14:36.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.584 "is_configured": false, 00:14:36.584 "data_offset": 2048, 00:14:36.584 "data_size": 63488 00:14:36.584 }, 00:14:36.584 { 00:14:36.584 "name": "BaseBdev3", 00:14:36.584 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:36.584 "is_configured": true, 00:14:36.584 "data_offset": 2048, 00:14:36.584 "data_size": 63488 00:14:36.584 }, 00:14:36.584 { 00:14:36.584 "name": "BaseBdev4", 00:14:36.584 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:36.584 "is_configured": true, 00:14:36.584 "data_offset": 2048, 00:14:36.584 "data_size": 63488 00:14:36.584 } 00:14:36.584 ] 00:14:36.584 }' 00:14:36.584 09:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.584 09:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.843 09:27:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.843 09:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.843 09:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.843 [2024-11-20 09:27:02.231584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.843 [2024-11-20 09:27:02.231789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.843 [2024-11-20 09:27:02.231807] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:36.843 [2024-11-20 09:27:02.231850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.843 [2024-11-20 09:27:02.246976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:36.843 09:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.843 09:27:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:36.843 [2024-11-20 09:27:02.248999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.223 "name": "raid_bdev1", 00:14:38.223 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:38.223 "strip_size_kb": 0, 00:14:38.223 "state": "online", 00:14:38.223 "raid_level": "raid1", 
00:14:38.223 "superblock": true, 00:14:38.223 "num_base_bdevs": 4, 00:14:38.223 "num_base_bdevs_discovered": 3, 00:14:38.223 "num_base_bdevs_operational": 3, 00:14:38.223 "process": { 00:14:38.223 "type": "rebuild", 00:14:38.223 "target": "spare", 00:14:38.223 "progress": { 00:14:38.223 "blocks": 20480, 00:14:38.223 "percent": 32 00:14:38.223 } 00:14:38.223 }, 00:14:38.223 "base_bdevs_list": [ 00:14:38.223 { 00:14:38.223 "name": "spare", 00:14:38.223 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:38.223 "is_configured": true, 00:14:38.223 "data_offset": 2048, 00:14:38.223 "data_size": 63488 00:14:38.223 }, 00:14:38.223 { 00:14:38.223 "name": null, 00:14:38.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.223 "is_configured": false, 00:14:38.223 "data_offset": 2048, 00:14:38.223 "data_size": 63488 00:14:38.223 }, 00:14:38.223 { 00:14:38.223 "name": "BaseBdev3", 00:14:38.223 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:38.223 "is_configured": true, 00:14:38.223 "data_offset": 2048, 00:14:38.223 "data_size": 63488 00:14:38.223 }, 00:14:38.223 { 00:14:38.223 "name": "BaseBdev4", 00:14:38.223 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:38.223 "is_configured": true, 00:14:38.223 "data_offset": 2048, 00:14:38.223 "data_size": 63488 00:14:38.223 } 00:14:38.223 ] 00:14:38.223 }' 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:38.223 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.223 [2024-11-20 09:27:03.404639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.223 [2024-11-20 09:27:03.454832] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.223 [2024-11-20 09:27:03.454923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.223 [2024-11-20 09:27:03.454945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.223 [2024-11-20 09:27:03.454952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.224 "name": "raid_bdev1", 00:14:38.224 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:38.224 "strip_size_kb": 0, 00:14:38.224 "state": "online", 00:14:38.224 "raid_level": "raid1", 00:14:38.224 "superblock": true, 00:14:38.224 "num_base_bdevs": 4, 00:14:38.224 "num_base_bdevs_discovered": 2, 00:14:38.224 "num_base_bdevs_operational": 2, 00:14:38.224 "base_bdevs_list": [ 00:14:38.224 { 00:14:38.224 "name": null, 00:14:38.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.224 "is_configured": false, 00:14:38.224 "data_offset": 0, 00:14:38.224 "data_size": 63488 00:14:38.224 }, 00:14:38.224 { 00:14:38.224 "name": null, 00:14:38.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.224 "is_configured": false, 00:14:38.224 "data_offset": 2048, 00:14:38.224 "data_size": 63488 00:14:38.224 }, 00:14:38.224 { 00:14:38.224 "name": "BaseBdev3", 00:14:38.224 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:38.224 "is_configured": true, 00:14:38.224 "data_offset": 2048, 00:14:38.224 "data_size": 63488 00:14:38.224 }, 00:14:38.224 { 00:14:38.224 "name": "BaseBdev4", 00:14:38.224 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:38.224 "is_configured": true, 00:14:38.224 "data_offset": 2048, 00:14:38.224 "data_size": 63488 00:14:38.224 } 00:14:38.224 ] 00:14:38.224 }' 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:38.224 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.792 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:38.793 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.793 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.793 [2024-11-20 09:27:03.948231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:38.793 [2024-11-20 09:27:03.948372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.793 [2024-11-20 09:27:03.948446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:38.793 [2024-11-20 09:27:03.948485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.793 [2024-11-20 09:27:03.949056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.793 [2024-11-20 09:27:03.949125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:38.793 [2024-11-20 09:27:03.949277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:38.793 [2024-11-20 09:27:03.949326] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.793 [2024-11-20 09:27:03.949401] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:38.793 [2024-11-20 09:27:03.949482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.793 [2024-11-20 09:27:03.966279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:38.793 spare 00:14:38.793 09:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.793 09:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:38.793 [2024-11-20 09:27:03.968370] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.731 09:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.731 "name": "raid_bdev1", 00:14:39.731 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:39.731 "strip_size_kb": 0, 00:14:39.731 "state": "online", 00:14:39.731 
"raid_level": "raid1", 00:14:39.731 "superblock": true, 00:14:39.731 "num_base_bdevs": 4, 00:14:39.731 "num_base_bdevs_discovered": 3, 00:14:39.731 "num_base_bdevs_operational": 3, 00:14:39.731 "process": { 00:14:39.731 "type": "rebuild", 00:14:39.731 "target": "spare", 00:14:39.731 "progress": { 00:14:39.731 "blocks": 20480, 00:14:39.731 "percent": 32 00:14:39.731 } 00:14:39.731 }, 00:14:39.731 "base_bdevs_list": [ 00:14:39.731 { 00:14:39.731 "name": "spare", 00:14:39.731 "uuid": "837970b2-dd82-5aac-9444-d2118a5848fa", 00:14:39.731 "is_configured": true, 00:14:39.731 "data_offset": 2048, 00:14:39.731 "data_size": 63488 00:14:39.731 }, 00:14:39.731 { 00:14:39.731 "name": null, 00:14:39.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.731 "is_configured": false, 00:14:39.731 "data_offset": 2048, 00:14:39.731 "data_size": 63488 00:14:39.731 }, 00:14:39.731 { 00:14:39.731 "name": "BaseBdev3", 00:14:39.731 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:39.731 "is_configured": true, 00:14:39.731 "data_offset": 2048, 00:14:39.731 "data_size": 63488 00:14:39.731 }, 00:14:39.731 { 00:14:39.731 "name": "BaseBdev4", 00:14:39.731 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:39.731 "is_configured": true, 00:14:39.731 "data_offset": 2048, 00:14:39.731 "data_size": 63488 00:14:39.731 } 00:14:39.731 ] 00:14:39.731 }' 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.731 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.731 [2024-11-20 09:27:05.112019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.731 [2024-11-20 09:27:05.174339] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.731 [2024-11-20 09:27:05.174420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.731 [2024-11-20 09:27:05.174454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.731 [2024-11-20 09:27:05.174466] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.990 
09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.990 "name": "raid_bdev1", 00:14:39.990 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:39.990 "strip_size_kb": 0, 00:14:39.990 "state": "online", 00:14:39.990 "raid_level": "raid1", 00:14:39.990 "superblock": true, 00:14:39.990 "num_base_bdevs": 4, 00:14:39.990 "num_base_bdevs_discovered": 2, 00:14:39.990 "num_base_bdevs_operational": 2, 00:14:39.990 "base_bdevs_list": [ 00:14:39.990 { 00:14:39.990 "name": null, 00:14:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.990 "is_configured": false, 00:14:39.990 "data_offset": 0, 00:14:39.990 "data_size": 63488 00:14:39.990 }, 00:14:39.990 { 00:14:39.990 "name": null, 00:14:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.990 "is_configured": false, 00:14:39.990 "data_offset": 2048, 00:14:39.990 "data_size": 63488 00:14:39.990 }, 00:14:39.990 { 00:14:39.990 "name": "BaseBdev3", 00:14:39.990 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:39.990 "is_configured": true, 00:14:39.990 "data_offset": 2048, 00:14:39.990 "data_size": 63488 00:14:39.990 }, 00:14:39.990 { 00:14:39.990 "name": "BaseBdev4", 00:14:39.990 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:39.990 "is_configured": true, 00:14:39.990 "data_offset": 2048, 00:14:39.990 "data_size": 63488 00:14:39.990 } 00:14:39.990 ] 00:14:39.990 }' 00:14:39.990 09:27:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.990 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.249 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.250 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.508 "name": "raid_bdev1", 00:14:40.508 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:40.508 "strip_size_kb": 0, 00:14:40.508 "state": "online", 00:14:40.508 "raid_level": "raid1", 00:14:40.508 "superblock": true, 00:14:40.508 "num_base_bdevs": 4, 00:14:40.508 "num_base_bdevs_discovered": 2, 00:14:40.508 "num_base_bdevs_operational": 2, 00:14:40.508 "base_bdevs_list": [ 00:14:40.508 { 00:14:40.508 "name": null, 00:14:40.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.508 "is_configured": false, 00:14:40.508 "data_offset": 0, 00:14:40.508 "data_size": 63488 00:14:40.508 }, 00:14:40.508 
{ 00:14:40.508 "name": null, 00:14:40.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.508 "is_configured": false, 00:14:40.508 "data_offset": 2048, 00:14:40.508 "data_size": 63488 00:14:40.508 }, 00:14:40.508 { 00:14:40.508 "name": "BaseBdev3", 00:14:40.508 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:40.508 "is_configured": true, 00:14:40.508 "data_offset": 2048, 00:14:40.508 "data_size": 63488 00:14:40.508 }, 00:14:40.508 { 00:14:40.508 "name": "BaseBdev4", 00:14:40.508 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:40.508 "is_configured": true, 00:14:40.508 "data_offset": 2048, 00:14:40.508 "data_size": 63488 00:14:40.508 } 00:14:40.508 ] 00:14:40.508 }' 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.508 [2024-11-20 09:27:05.849535] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:40.508 [2024-11-20 09:27:05.849625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.508 [2024-11-20 09:27:05.849650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:40.508 [2024-11-20 09:27:05.849662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.508 [2024-11-20 09:27:05.850147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.508 [2024-11-20 09:27:05.850170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:40.508 [2024-11-20 09:27:05.850258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:40.508 [2024-11-20 09:27:05.850277] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:40.508 [2024-11-20 09:27:05.850287] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:40.508 [2024-11-20 09:27:05.850318] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:40.508 BaseBdev1 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.508 09:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.443 09:27:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.443 09:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.702 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.702 "name": "raid_bdev1", 00:14:41.702 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:41.702 "strip_size_kb": 0, 00:14:41.702 "state": "online", 00:14:41.702 "raid_level": "raid1", 00:14:41.702 "superblock": true, 00:14:41.702 "num_base_bdevs": 4, 00:14:41.702 "num_base_bdevs_discovered": 2, 00:14:41.702 "num_base_bdevs_operational": 2, 00:14:41.702 "base_bdevs_list": [ 00:14:41.702 { 00:14:41.702 "name": null, 00:14:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.702 "is_configured": false, 00:14:41.703 "data_offset": 0, 00:14:41.703 "data_size": 63488 00:14:41.703 }, 00:14:41.703 { 00:14:41.703 "name": null, 00:14:41.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.703 
"is_configured": false, 00:14:41.703 "data_offset": 2048, 00:14:41.703 "data_size": 63488 00:14:41.703 }, 00:14:41.703 { 00:14:41.703 "name": "BaseBdev3", 00:14:41.703 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:41.703 "is_configured": true, 00:14:41.703 "data_offset": 2048, 00:14:41.703 "data_size": 63488 00:14:41.703 }, 00:14:41.703 { 00:14:41.703 "name": "BaseBdev4", 00:14:41.703 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:41.703 "is_configured": true, 00:14:41.703 "data_offset": 2048, 00:14:41.703 "data_size": 63488 00:14:41.703 } 00:14:41.703 ] 00:14:41.703 }' 00:14:41.703 09:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.703 09:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:41.962 "name": "raid_bdev1", 00:14:41.962 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:41.962 "strip_size_kb": 0, 00:14:41.962 "state": "online", 00:14:41.962 "raid_level": "raid1", 00:14:41.962 "superblock": true, 00:14:41.962 "num_base_bdevs": 4, 00:14:41.962 "num_base_bdevs_discovered": 2, 00:14:41.962 "num_base_bdevs_operational": 2, 00:14:41.962 "base_bdevs_list": [ 00:14:41.962 { 00:14:41.962 "name": null, 00:14:41.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.962 "is_configured": false, 00:14:41.962 "data_offset": 0, 00:14:41.962 "data_size": 63488 00:14:41.962 }, 00:14:41.962 { 00:14:41.962 "name": null, 00:14:41.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.962 "is_configured": false, 00:14:41.962 "data_offset": 2048, 00:14:41.962 "data_size": 63488 00:14:41.962 }, 00:14:41.962 { 00:14:41.962 "name": "BaseBdev3", 00:14:41.962 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:41.962 "is_configured": true, 00:14:41.962 "data_offset": 2048, 00:14:41.962 "data_size": 63488 00:14:41.962 }, 00:14:41.962 { 00:14:41.962 "name": "BaseBdev4", 00:14:41.962 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:41.962 "is_configured": true, 00:14:41.962 "data_offset": 2048, 00:14:41.962 "data_size": 63488 00:14:41.962 } 00:14:41.962 ] 00:14:41.962 }' 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.962 [2024-11-20 09:27:07.335546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.962 [2024-11-20 09:27:07.335836] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:41.962 [2024-11-20 09:27:07.335911] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:41.962 request: 00:14:41.962 { 00:14:41.962 "base_bdev": "BaseBdev1", 00:14:41.962 "raid_bdev": "raid_bdev1", 00:14:41.962 "method": "bdev_raid_add_base_bdev", 00:14:41.962 "req_id": 1 00:14:41.962 } 00:14:41.962 Got JSON-RPC error response 00:14:41.962 response: 00:14:41.962 { 00:14:41.962 "code": -22, 00:14:41.962 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:41.962 } 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:41.962 09:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.899 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.900 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.159 "name": "raid_bdev1", 00:14:43.159 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:43.159 "strip_size_kb": 0, 00:14:43.159 "state": "online", 00:14:43.159 "raid_level": "raid1", 00:14:43.159 "superblock": true, 00:14:43.159 "num_base_bdevs": 4, 00:14:43.159 "num_base_bdevs_discovered": 2, 00:14:43.159 "num_base_bdevs_operational": 2, 00:14:43.159 "base_bdevs_list": [ 00:14:43.159 { 00:14:43.159 "name": null, 00:14:43.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.159 "is_configured": false, 00:14:43.159 "data_offset": 0, 00:14:43.159 "data_size": 63488 00:14:43.159 }, 00:14:43.159 { 00:14:43.159 "name": null, 00:14:43.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.159 "is_configured": false, 00:14:43.159 "data_offset": 2048, 00:14:43.159 "data_size": 63488 00:14:43.159 }, 00:14:43.159 { 00:14:43.159 "name": "BaseBdev3", 00:14:43.159 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:43.159 "is_configured": true, 00:14:43.159 "data_offset": 2048, 00:14:43.159 "data_size": 63488 00:14:43.159 }, 00:14:43.159 { 00:14:43.159 "name": "BaseBdev4", 00:14:43.159 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:43.159 "is_configured": true, 00:14:43.159 "data_offset": 2048, 00:14:43.159 "data_size": 63488 00:14:43.159 } 00:14:43.159 ] 00:14:43.159 }' 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.159 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.418 09:27:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.418 "name": "raid_bdev1", 00:14:43.418 "uuid": "6422f794-048e-4876-b76c-28e01e4de49b", 00:14:43.418 "strip_size_kb": 0, 00:14:43.418 "state": "online", 00:14:43.418 "raid_level": "raid1", 00:14:43.418 "superblock": true, 00:14:43.418 "num_base_bdevs": 4, 00:14:43.418 "num_base_bdevs_discovered": 2, 00:14:43.418 "num_base_bdevs_operational": 2, 00:14:43.418 "base_bdevs_list": [ 00:14:43.418 { 00:14:43.418 "name": null, 00:14:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.418 "is_configured": false, 00:14:43.418 "data_offset": 0, 00:14:43.418 "data_size": 63488 00:14:43.418 }, 00:14:43.418 { 00:14:43.418 "name": null, 00:14:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.418 "is_configured": false, 00:14:43.418 "data_offset": 2048, 00:14:43.418 "data_size": 63488 00:14:43.418 }, 00:14:43.418 { 00:14:43.418 "name": "BaseBdev3", 00:14:43.418 "uuid": "4bfbbeab-0755-531a-aee5-7057ab6a9978", 00:14:43.418 "is_configured": true, 00:14:43.418 "data_offset": 2048, 00:14:43.418 "data_size": 63488 00:14:43.418 }, 
00:14:43.418 { 00:14:43.418 "name": "BaseBdev4", 00:14:43.418 "uuid": "ee57091f-5730-59b2-8a8d-8592dcb948b8", 00:14:43.418 "is_configured": true, 00:14:43.418 "data_offset": 2048, 00:14:43.418 "data_size": 63488 00:14:43.418 } 00:14:43.418 ] 00:14:43.418 }' 00:14:43.418 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78378 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78378 ']' 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78378 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78378 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.675 killing process with pid 78378 00:14:43.675 Received shutdown signal, test time was about 60.000000 seconds 00:14:43.675 00:14:43.675 Latency(us) 00:14:43.675 [2024-11-20T09:27:09.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.675 [2024-11-20T09:27:09.131Z] =================================================================================================================== 00:14:43.675 [2024-11-20T09:27:09.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78378' 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78378 00:14:43.675 [2024-11-20 09:27:08.938374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.675 09:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78378 00:14:43.675 [2024-11-20 09:27:08.938527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.675 [2024-11-20 09:27:08.938610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.675 [2024-11-20 09:27:08.938621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:44.239 [2024-11-20 09:27:09.478343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:45.614 00:14:45.614 real 0m26.062s 00:14:45.614 user 0m31.475s 00:14:45.614 sys 0m3.811s 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.614 ************************************ 00:14:45.614 END TEST raid_rebuild_test_sb 00:14:45.614 ************************************ 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.614 09:27:10 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:45.614 09:27:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:45.614 09:27:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.614 09:27:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:45.614 ************************************ 00:14:45.614 START TEST raid_rebuild_test_io 00:14:45.614 ************************************ 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:45.614 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79137 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79137 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79137 ']' 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.615 09:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 [2024-11-20 09:27:10.854287] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:14:45.615 [2024-11-20 09:27:10.854539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79137 ] 00:14:45.615 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:45.615 Zero copy mechanism will not be used. 
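The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` trace lines in the prologue above are xtrace output from the loop `raid_rebuild_test` uses to compose its base bdev list. A minimal self-contained sketch of that loop (names mirror the trace; this does not talk to SPDK):

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev list construction traced in the test prologue.
num_base_bdevs=4
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"
# → BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

In the actual script the `echo BaseBdev$i` lines feed a command substitution that populates the `base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')` array seen in the trace.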
00:14:45.615 [2024-11-20 09:27:11.031496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.874 [2024-11-20 09:27:11.163322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.132 [2024-11-20 09:27:11.378509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.132 [2024-11-20 09:27:11.378663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.392 BaseBdev1_malloc 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.392 [2024-11-20 09:27:11.776519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:46.392 [2024-11-20 09:27:11.776624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.392 [2024-11-20 09:27:11.776648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:46.392 [2024-11-20 
09:27:11.776660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.392 [2024-11-20 09:27:11.778938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.392 [2024-11-20 09:27:11.778981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:46.392 BaseBdev1 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.392 BaseBdev2_malloc 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.392 [2024-11-20 09:27:11.830628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:46.392 [2024-11-20 09:27:11.830701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.392 [2024-11-20 09:27:11.830721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:46.392 [2024-11-20 09:27:11.830735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.392 [2024-11-20 09:27:11.833043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:46.392 [2024-11-20 09:27:11.833087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:46.392 BaseBdev2 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.392 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.651 BaseBdev3_malloc 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.651 [2024-11-20 09:27:11.897644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:46.651 [2024-11-20 09:27:11.897790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.651 [2024-11-20 09:27:11.897855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:46.651 [2024-11-20 09:27:11.897904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.651 [2024-11-20 09:27:11.900302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.651 [2024-11-20 09:27:11.900405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:46.651 BaseBdev3 00:14:46.651 09:27:11 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.651 BaseBdev4_malloc 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.651 [2024-11-20 09:27:11.952248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:46.651 [2024-11-20 09:27:11.952320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.651 [2024-11-20 09:27:11.952347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:46.651 [2024-11-20 09:27:11.952360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.651 [2024-11-20 09:27:11.954901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.651 [2024-11-20 09:27:11.954953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:46.651 BaseBdev4 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.651 09:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:46.652 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.652 09:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 spare_malloc 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 spare_delay 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 [2024-11-20 09:27:12.021247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:46.652 [2024-11-20 09:27:12.021321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.652 [2024-11-20 09:27:12.021349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:46.652 [2024-11-20 09:27:12.021362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.652 [2024-11-20 09:27:12.023867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.652 [2024-11-20 09:27:12.023967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:46.652 spare 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 [2024-11-20 09:27:12.033269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.652 [2024-11-20 09:27:12.035378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.652 [2024-11-20 09:27:12.035563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.652 [2024-11-20 09:27:12.035634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.652 [2024-11-20 09:27:12.035732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:46.652 [2024-11-20 09:27:12.035749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:46.652 [2024-11-20 09:27:12.036057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:46.652 [2024-11-20 09:27:12.036267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:46.652 [2024-11-20 09:27:12.036282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:46.652 [2024-11-20 09:27:12.036479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:46.652 09:27:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.652 "name": "raid_bdev1", 00:14:46.652 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:46.652 "strip_size_kb": 0, 00:14:46.652 "state": "online", 00:14:46.652 "raid_level": "raid1", 00:14:46.652 "superblock": false, 00:14:46.652 "num_base_bdevs": 4, 00:14:46.652 "num_base_bdevs_discovered": 4, 00:14:46.652 "num_base_bdevs_operational": 4, 00:14:46.652 "base_bdevs_list": [ 00:14:46.652 
{ 00:14:46.652 "name": "BaseBdev1", 00:14:46.652 "uuid": "080cf93d-c048-593a-a718-035e10893794", 00:14:46.652 "is_configured": true, 00:14:46.652 "data_offset": 0, 00:14:46.652 "data_size": 65536 00:14:46.652 }, 00:14:46.652 { 00:14:46.652 "name": "BaseBdev2", 00:14:46.652 "uuid": "390f7cf7-d507-549d-a232-6e5cc18ae742", 00:14:46.652 "is_configured": true, 00:14:46.652 "data_offset": 0, 00:14:46.652 "data_size": 65536 00:14:46.652 }, 00:14:46.652 { 00:14:46.652 "name": "BaseBdev3", 00:14:46.652 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:46.652 "is_configured": true, 00:14:46.652 "data_offset": 0, 00:14:46.652 "data_size": 65536 00:14:46.652 }, 00:14:46.652 { 00:14:46.652 "name": "BaseBdev4", 00:14:46.652 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:46.652 "is_configured": true, 00:14:46.652 "data_offset": 0, 00:14:46.652 "data_size": 65536 00:14:46.652 } 00:14:46.652 ] 00:14:46.652 }' 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.652 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.221 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.222 [2024-11-20 09:27:12.512861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.222 [2024-11-20 09:27:12.608288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.222 "name": "raid_bdev1", 00:14:47.222 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:47.222 "strip_size_kb": 0, 00:14:47.222 "state": "online", 00:14:47.222 "raid_level": "raid1", 00:14:47.222 "superblock": false, 00:14:47.222 "num_base_bdevs": 4, 00:14:47.222 "num_base_bdevs_discovered": 3, 00:14:47.222 "num_base_bdevs_operational": 3, 00:14:47.222 "base_bdevs_list": [ 00:14:47.222 { 00:14:47.222 "name": null, 00:14:47.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.222 "is_configured": false, 00:14:47.222 "data_offset": 0, 00:14:47.222 "data_size": 65536 00:14:47.222 }, 00:14:47.222 { 00:14:47.222 "name": "BaseBdev2", 00:14:47.222 "uuid": "390f7cf7-d507-549d-a232-6e5cc18ae742", 00:14:47.222 "is_configured": true, 00:14:47.222 "data_offset": 0, 00:14:47.222 "data_size": 65536 00:14:47.222 }, 00:14:47.222 { 00:14:47.222 "name": "BaseBdev3", 00:14:47.222 "uuid": 
"a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:47.222 "is_configured": true, 00:14:47.222 "data_offset": 0, 00:14:47.222 "data_size": 65536 00:14:47.222 }, 00:14:47.222 { 00:14:47.222 "name": "BaseBdev4", 00:14:47.222 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:47.222 "is_configured": true, 00:14:47.222 "data_offset": 0, 00:14:47.222 "data_size": 65536 00:14:47.222 } 00:14:47.222 ] 00:14:47.222 }' 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.222 09:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.481 [2024-11-20 09:27:12.729556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:47.481 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.481 Zero copy mechanism will not be used. 00:14:47.481 Running I/O for 60 seconds... 00:14:47.740 09:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.740 09:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.740 09:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.740 [2024-11-20 09:27:13.112111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.740 09:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.740 09:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:47.999 [2024-11-20 09:27:13.199293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:47.999 [2024-11-20 09:27:13.201644] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.999 [2024-11-20 09:27:13.320477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.999 
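The `verify_raid_bdev_state` check above fetches `bdev_raid_get_bdevs` output and inspects `state` and `num_base_bdevs_discovered` (4 before the `bdev_raid_remove_base_bdev BaseBdev1` call, 3 after). A hedged sketch of that check follows; the JSON is trimmed to the two fields inspected, and `sed` stands in for the test's `jq` filters so the snippet has no jq dependency.

```shell
#!/usr/bin/env bash
# Sketch of the degraded-state check; JSON values taken from the log above.
raid_bdev_info='{"name": "raid_bdev1", "state": "online", "num_base_bdevs_discovered": 3}'

# Extract the two fields the test asserts on
state=$(sed -n 's/.*"state": "\([a-z]*\)".*/\1/p' <<< "$raid_bdev_info")
discovered=$(sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p' <<< "$raid_bdev_info")

if [[ $state == online && $discovered -eq 3 ]]; then
    echo "raid_bdev1 degraded but online"
fi
```

This mirrors why a raid1 array with one removed base bdev still verifies as `online`: three of four members remain discovered and operational.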
[2024-11-20 09:27:13.321154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:48.259 [2024-11-20 09:27:13.532307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:48.259 [2024-11-20 09:27:13.532768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:48.518 137.00 IOPS, 411.00 MiB/s [2024-11-20T09:27:13.974Z] [2024-11-20 09:27:13.808913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.518 [2024-11-20 09:27:13.809522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.518 [2024-11-20 09:27:13.918993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.518 [2024-11-20 09:27:13.919336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.777 09:27:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.777 "name": "raid_bdev1", 00:14:48.777 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:48.777 "strip_size_kb": 0, 00:14:48.777 "state": "online", 00:14:48.777 "raid_level": "raid1", 00:14:48.777 "superblock": false, 00:14:48.777 "num_base_bdevs": 4, 00:14:48.777 "num_base_bdevs_discovered": 4, 00:14:48.777 "num_base_bdevs_operational": 4, 00:14:48.777 "process": { 00:14:48.777 "type": "rebuild", 00:14:48.777 "target": "spare", 00:14:48.777 "progress": { 00:14:48.777 "blocks": 14336, 00:14:48.777 "percent": 21 00:14:48.777 } 00:14:48.777 }, 00:14:48.777 "base_bdevs_list": [ 00:14:48.777 { 00:14:48.777 "name": "spare", 00:14:48.777 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:48.777 "is_configured": true, 00:14:48.777 "data_offset": 0, 00:14:48.777 "data_size": 65536 00:14:48.777 }, 00:14:48.777 { 00:14:48.777 "name": "BaseBdev2", 00:14:48.777 "uuid": "390f7cf7-d507-549d-a232-6e5cc18ae742", 00:14:48.777 "is_configured": true, 00:14:48.777 "data_offset": 0, 00:14:48.777 "data_size": 65536 00:14:48.777 }, 00:14:48.777 { 00:14:48.777 "name": "BaseBdev3", 00:14:48.777 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:48.777 "is_configured": true, 00:14:48.777 "data_offset": 0, 00:14:48.777 "data_size": 65536 00:14:48.777 }, 00:14:48.777 { 00:14:48.777 "name": "BaseBdev4", 00:14:48.777 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:48.777 "is_configured": true, 00:14:48.777 "data_offset": 0, 00:14:48.777 "data_size": 65536 00:14:48.777 } 00:14:48.777 ] 00:14:48.777 }' 00:14:48.777 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:49.036 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.036 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.036 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.036 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:49.036 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.036 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.036 [2024-11-20 09:27:14.322448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.036 [2024-11-20 09:27:14.429114] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:49.036 [2024-11-20 09:27:14.440029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.036 [2024-11-20 09:27:14.440081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.036 [2024-11-20 09:27:14.440098] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:49.036 [2024-11-20 09:27:14.479722] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.296 "name": "raid_bdev1", 00:14:49.296 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:49.296 "strip_size_kb": 0, 00:14:49.296 "state": "online", 00:14:49.296 "raid_level": "raid1", 00:14:49.296 "superblock": false, 00:14:49.296 "num_base_bdevs": 4, 00:14:49.296 "num_base_bdevs_discovered": 3, 00:14:49.296 "num_base_bdevs_operational": 3, 00:14:49.296 "base_bdevs_list": [ 00:14:49.296 { 00:14:49.296 "name": null, 00:14:49.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.296 "is_configured": false, 00:14:49.296 "data_offset": 0, 00:14:49.296 "data_size": 65536 00:14:49.296 }, 00:14:49.296 { 00:14:49.296 "name": "BaseBdev2", 
00:14:49.296 "uuid": "390f7cf7-d507-549d-a232-6e5cc18ae742", 00:14:49.296 "is_configured": true, 00:14:49.296 "data_offset": 0, 00:14:49.296 "data_size": 65536 00:14:49.296 }, 00:14:49.296 { 00:14:49.296 "name": "BaseBdev3", 00:14:49.296 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:49.296 "is_configured": true, 00:14:49.296 "data_offset": 0, 00:14:49.296 "data_size": 65536 00:14:49.296 }, 00:14:49.296 { 00:14:49.296 "name": "BaseBdev4", 00:14:49.296 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:49.296 "is_configured": true, 00:14:49.296 "data_offset": 0, 00:14:49.296 "data_size": 65536 00:14:49.296 } 00:14:49.296 ] 00:14:49.296 }' 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.296 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.556 133.00 IOPS, 399.00 MiB/s [2024-11-20T09:27:15.012Z] 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.556 "name": "raid_bdev1", 00:14:49.556 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:49.556 "strip_size_kb": 0, 00:14:49.556 "state": "online", 00:14:49.556 "raid_level": "raid1", 00:14:49.556 "superblock": false, 00:14:49.556 "num_base_bdevs": 4, 00:14:49.556 "num_base_bdevs_discovered": 3, 00:14:49.556 "num_base_bdevs_operational": 3, 00:14:49.556 "base_bdevs_list": [ 00:14:49.556 { 00:14:49.556 "name": null, 00:14:49.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.556 "is_configured": false, 00:14:49.556 "data_offset": 0, 00:14:49.556 "data_size": 65536 00:14:49.556 }, 00:14:49.556 { 00:14:49.556 "name": "BaseBdev2", 00:14:49.556 "uuid": "390f7cf7-d507-549d-a232-6e5cc18ae742", 00:14:49.556 "is_configured": true, 00:14:49.556 "data_offset": 0, 00:14:49.556 "data_size": 65536 00:14:49.556 }, 00:14:49.556 { 00:14:49.556 "name": "BaseBdev3", 00:14:49.556 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:49.556 "is_configured": true, 00:14:49.556 "data_offset": 0, 00:14:49.556 "data_size": 65536 00:14:49.556 }, 00:14:49.556 { 00:14:49.556 "name": "BaseBdev4", 00:14:49.556 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:49.556 "is_configured": true, 00:14:49.556 "data_offset": 0, 00:14:49.556 "data_size": 65536 00:14:49.556 } 00:14:49.556 ] 00:14:49.556 }' 00:14:49.556 09:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.816 [2024-11-20 09:27:15.078730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.816 09:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:49.816 [2024-11-20 09:27:15.153685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:49.816 [2024-11-20 09:27:15.155908] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.076 [2024-11-20 09:27:15.274625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.076 [2024-11-20 09:27:15.276161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.076 [2024-11-20 09:27:15.488167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.076 [2024-11-20 09:27:15.488658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:50.593 142.00 IOPS, 426.00 MiB/s [2024-11-20T09:27:16.049Z] [2024-11-20 09:27:15.883162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.852 "name": "raid_bdev1", 00:14:50.852 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:50.852 "strip_size_kb": 0, 00:14:50.852 "state": "online", 00:14:50.852 "raid_level": "raid1", 00:14:50.852 "superblock": false, 00:14:50.852 "num_base_bdevs": 4, 00:14:50.852 "num_base_bdevs_discovered": 4, 00:14:50.852 "num_base_bdevs_operational": 4, 00:14:50.852 "process": { 00:14:50.852 "type": "rebuild", 00:14:50.852 "target": "spare", 00:14:50.852 "progress": { 00:14:50.852 "blocks": 12288, 00:14:50.852 "percent": 18 00:14:50.852 } 00:14:50.852 }, 00:14:50.852 "base_bdevs_list": [ 00:14:50.852 { 00:14:50.852 "name": "spare", 00:14:50.852 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:50.852 "is_configured": true, 00:14:50.852 "data_offset": 0, 00:14:50.852 "data_size": 65536 00:14:50.852 }, 00:14:50.852 { 00:14:50.852 "name": "BaseBdev2", 00:14:50.852 "uuid": "390f7cf7-d507-549d-a232-6e5cc18ae742", 00:14:50.852 "is_configured": true, 00:14:50.852 "data_offset": 0, 00:14:50.852 "data_size": 65536 00:14:50.852 }, 00:14:50.852 { 00:14:50.852 "name": "BaseBdev3", 00:14:50.852 "uuid": 
"a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:50.852 "is_configured": true, 00:14:50.852 "data_offset": 0, 00:14:50.852 "data_size": 65536 00:14:50.852 }, 00:14:50.852 { 00:14:50.852 "name": "BaseBdev4", 00:14:50.852 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:50.852 "is_configured": true, 00:14:50.852 "data_offset": 0, 00:14:50.852 "data_size": 65536 00:14:50.852 } 00:14:50.852 ] 00:14:50.852 }' 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.852 [2024-11-20 09:27:16.226357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.852 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.852 [2024-11-20 09:27:16.287337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.111 [2024-11-20 09:27:16.454862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:51.111 [2024-11-20 09:27:16.557230] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:51.111 [2024-11-20 09:27:16.557280] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:51.111 [2024-11-20 09:27:16.559066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.111 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.370 "name": "raid_bdev1", 00:14:51.370 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:51.370 "strip_size_kb": 0, 00:14:51.370 "state": "online", 00:14:51.370 "raid_level": "raid1", 00:14:51.370 "superblock": false, 00:14:51.370 "num_base_bdevs": 4, 00:14:51.370 "num_base_bdevs_discovered": 3, 00:14:51.370 "num_base_bdevs_operational": 3, 00:14:51.370 "process": { 00:14:51.370 "type": "rebuild", 00:14:51.370 "target": "spare", 00:14:51.370 "progress": { 00:14:51.370 "blocks": 16384, 00:14:51.370 "percent": 25 00:14:51.370 } 00:14:51.370 }, 00:14:51.370 "base_bdevs_list": [ 00:14:51.370 { 00:14:51.370 "name": "spare", 00:14:51.370 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:51.370 "is_configured": true, 00:14:51.370 "data_offset": 0, 00:14:51.370 "data_size": 65536 00:14:51.370 }, 00:14:51.370 { 00:14:51.370 "name": null, 00:14:51.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.370 "is_configured": false, 00:14:51.370 "data_offset": 0, 00:14:51.370 "data_size": 65536 00:14:51.370 }, 00:14:51.370 { 00:14:51.370 "name": "BaseBdev3", 00:14:51.370 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:51.370 "is_configured": true, 00:14:51.370 "data_offset": 0, 00:14:51.370 "data_size": 65536 00:14:51.370 }, 00:14:51.370 { 00:14:51.370 "name": "BaseBdev4", 00:14:51.370 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:51.370 "is_configured": true, 00:14:51.370 "data_offset": 0, 00:14:51.370 "data_size": 65536 00:14:51.370 } 00:14:51.370 ] 00:14:51.370 }' 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=511 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.370 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.370 "name": "raid_bdev1", 00:14:51.370 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:51.370 "strip_size_kb": 0, 00:14:51.370 "state": "online", 00:14:51.370 "raid_level": "raid1", 00:14:51.370 "superblock": false, 00:14:51.370 "num_base_bdevs": 4, 00:14:51.370 "num_base_bdevs_discovered": 3, 00:14:51.370 "num_base_bdevs_operational": 3, 00:14:51.370 "process": { 00:14:51.370 "type": "rebuild", 00:14:51.370 "target": "spare", 00:14:51.371 "progress": { 00:14:51.371 "blocks": 16384, 00:14:51.371 "percent": 25 00:14:51.371 } 00:14:51.371 }, 
00:14:51.371 "base_bdevs_list": [ 00:14:51.371 { 00:14:51.371 "name": "spare", 00:14:51.371 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:51.371 "is_configured": true, 00:14:51.371 "data_offset": 0, 00:14:51.371 "data_size": 65536 00:14:51.371 }, 00:14:51.371 { 00:14:51.371 "name": null, 00:14:51.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.371 "is_configured": false, 00:14:51.371 "data_offset": 0, 00:14:51.371 "data_size": 65536 00:14:51.371 }, 00:14:51.371 { 00:14:51.371 "name": "BaseBdev3", 00:14:51.371 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:51.371 "is_configured": true, 00:14:51.371 "data_offset": 0, 00:14:51.371 "data_size": 65536 00:14:51.371 }, 00:14:51.371 { 00:14:51.371 "name": "BaseBdev4", 00:14:51.371 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:51.371 "is_configured": true, 00:14:51.371 "data_offset": 0, 00:14:51.371 "data_size": 65536 00:14:51.371 } 00:14:51.371 ] 00:14:51.371 }' 00:14:51.371 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.371 132.50 IOPS, 397.50 MiB/s [2024-11-20T09:27:16.827Z] 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.371 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.371 [2024-11-20 09:27:16.813736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:51.371 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.371 09:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.629 [2024-11-20 09:27:16.931153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:51.887 [2024-11-20 09:27:17.268254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:52.151 [2024-11-20 09:27:17.485194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:52.410 [2024-11-20 09:27:17.728614] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:52.410 116.60 IOPS, 349.80 MiB/s [2024-11-20T09:27:17.866Z] 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.410 09:27:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.669 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.669 "name": "raid_bdev1", 00:14:52.669 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:52.669 "strip_size_kb": 0, 00:14:52.669 "state": "online", 00:14:52.669 "raid_level": "raid1", 00:14:52.669 "superblock": false, 
00:14:52.669 "num_base_bdevs": 4, 00:14:52.669 "num_base_bdevs_discovered": 3, 00:14:52.669 "num_base_bdevs_operational": 3, 00:14:52.669 "process": { 00:14:52.669 "type": "rebuild", 00:14:52.669 "target": "spare", 00:14:52.669 "progress": { 00:14:52.669 "blocks": 32768, 00:14:52.669 "percent": 50 00:14:52.669 } 00:14:52.669 }, 00:14:52.669 "base_bdevs_list": [ 00:14:52.669 { 00:14:52.669 "name": "spare", 00:14:52.669 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:52.670 "is_configured": true, 00:14:52.670 "data_offset": 0, 00:14:52.670 "data_size": 65536 00:14:52.670 }, 00:14:52.670 { 00:14:52.670 "name": null, 00:14:52.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.670 "is_configured": false, 00:14:52.670 "data_offset": 0, 00:14:52.670 "data_size": 65536 00:14:52.670 }, 00:14:52.670 { 00:14:52.670 "name": "BaseBdev3", 00:14:52.670 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:52.670 "is_configured": true, 00:14:52.670 "data_offset": 0, 00:14:52.670 "data_size": 65536 00:14:52.670 }, 00:14:52.670 { 00:14:52.670 "name": "BaseBdev4", 00:14:52.670 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:52.670 "is_configured": true, 00:14:52.670 "data_offset": 0, 00:14:52.670 "data_size": 65536 00:14:52.670 } 00:14:52.670 ] 00:14:52.670 }' 00:14:52.670 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.670 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.670 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.670 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.670 09:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.929 [2024-11-20 09:27:18.250044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 
00:14:53.188 [2024-11-20 09:27:18.583750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:53.706 106.50 IOPS, 319.50 MiB/s [2024-11-20T09:27:19.162Z] [2024-11-20 09:27:18.917308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.706 09:27:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.706 09:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.706 "name": "raid_bdev1", 00:14:53.706 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:53.706 "strip_size_kb": 0, 00:14:53.706 "state": "online", 00:14:53.706 "raid_level": "raid1", 00:14:53.706 "superblock": false, 00:14:53.706 "num_base_bdevs": 4, 00:14:53.706 
"num_base_bdevs_discovered": 3, 00:14:53.706 "num_base_bdevs_operational": 3, 00:14:53.706 "process": { 00:14:53.706 "type": "rebuild", 00:14:53.706 "target": "spare", 00:14:53.706 "progress": { 00:14:53.706 "blocks": 51200, 00:14:53.706 "percent": 78 00:14:53.706 } 00:14:53.706 }, 00:14:53.706 "base_bdevs_list": [ 00:14:53.706 { 00:14:53.706 "name": "spare", 00:14:53.706 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:53.706 "is_configured": true, 00:14:53.706 "data_offset": 0, 00:14:53.706 "data_size": 65536 00:14:53.706 }, 00:14:53.706 { 00:14:53.706 "name": null, 00:14:53.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.706 "is_configured": false, 00:14:53.706 "data_offset": 0, 00:14:53.707 "data_size": 65536 00:14:53.707 }, 00:14:53.707 { 00:14:53.707 "name": "BaseBdev3", 00:14:53.707 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:53.707 "is_configured": true, 00:14:53.707 "data_offset": 0, 00:14:53.707 "data_size": 65536 00:14:53.707 }, 00:14:53.707 { 00:14:53.707 "name": "BaseBdev4", 00:14:53.707 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:53.707 "is_configured": true, 00:14:53.707 "data_offset": 0, 00:14:53.707 "data_size": 65536 00:14:53.707 } 00:14:53.707 ] 00:14:53.707 }' 00:14:53.707 09:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.707 09:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.707 09:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.707 09:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.707 09:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.281 [2024-11-20 09:27:19.454298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:54.540 97.00 IOPS, 291.00 MiB/s 
[2024-11-20T09:27:19.996Z] [2024-11-20 09:27:19.789909] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:54.540 [2024-11-20 09:27:19.889776] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:54.540 [2024-11-20 09:27:19.892104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.800 "name": "raid_bdev1", 00:14:54.800 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:54.800 "strip_size_kb": 0, 00:14:54.800 "state": "online", 00:14:54.800 "raid_level": "raid1", 00:14:54.800 "superblock": false, 00:14:54.800 "num_base_bdevs": 4, 00:14:54.800 
"num_base_bdevs_discovered": 3, 00:14:54.800 "num_base_bdevs_operational": 3, 00:14:54.800 "base_bdevs_list": [ 00:14:54.800 { 00:14:54.800 "name": "spare", 00:14:54.800 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:54.800 "is_configured": true, 00:14:54.800 "data_offset": 0, 00:14:54.800 "data_size": 65536 00:14:54.800 }, 00:14:54.800 { 00:14:54.800 "name": null, 00:14:54.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.800 "is_configured": false, 00:14:54.800 "data_offset": 0, 00:14:54.800 "data_size": 65536 00:14:54.800 }, 00:14:54.800 { 00:14:54.800 "name": "BaseBdev3", 00:14:54.800 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:54.800 "is_configured": true, 00:14:54.800 "data_offset": 0, 00:14:54.800 "data_size": 65536 00:14:54.800 }, 00:14:54.800 { 00:14:54.800 "name": "BaseBdev4", 00:14:54.800 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:54.800 "is_configured": true, 00:14:54.800 "data_offset": 0, 00:14:54.800 "data_size": 65536 00:14:54.800 } 00:14:54.800 ] 00:14:54.800 }' 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.800 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.060 "name": "raid_bdev1", 00:14:55.060 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:55.060 "strip_size_kb": 0, 00:14:55.060 "state": "online", 00:14:55.060 "raid_level": "raid1", 00:14:55.060 "superblock": false, 00:14:55.060 "num_base_bdevs": 4, 00:14:55.060 "num_base_bdevs_discovered": 3, 00:14:55.060 "num_base_bdevs_operational": 3, 00:14:55.060 "base_bdevs_list": [ 00:14:55.060 { 00:14:55.060 "name": "spare", 00:14:55.060 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:55.060 "is_configured": true, 00:14:55.060 "data_offset": 0, 00:14:55.060 "data_size": 65536 00:14:55.060 }, 00:14:55.060 { 00:14:55.060 "name": null, 00:14:55.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.060 "is_configured": false, 00:14:55.060 "data_offset": 0, 00:14:55.060 "data_size": 65536 00:14:55.060 }, 00:14:55.060 { 00:14:55.060 "name": "BaseBdev3", 00:14:55.060 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:55.060 "is_configured": true, 00:14:55.060 "data_offset": 0, 00:14:55.060 "data_size": 65536 00:14:55.060 }, 00:14:55.060 { 00:14:55.060 "name": "BaseBdev4", 00:14:55.060 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:55.060 "is_configured": true, 
00:14:55.060 "data_offset": 0, 00:14:55.060 "data_size": 65536 00:14:55.060 } 00:14:55.060 ] 00:14:55.060 }' 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.060 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.061 "name": "raid_bdev1", 00:14:55.061 "uuid": "33b9de62-ec17-4b3f-8992-425bb2e2fc88", 00:14:55.061 "strip_size_kb": 0, 00:14:55.061 "state": "online", 00:14:55.061 "raid_level": "raid1", 00:14:55.061 "superblock": false, 00:14:55.061 "num_base_bdevs": 4, 00:14:55.061 "num_base_bdevs_discovered": 3, 00:14:55.061 "num_base_bdevs_operational": 3, 00:14:55.061 "base_bdevs_list": [ 00:14:55.061 { 00:14:55.061 "name": "spare", 00:14:55.061 "uuid": "9b22b276-e7ab-5f7d-a316-c56602105763", 00:14:55.061 "is_configured": true, 00:14:55.061 "data_offset": 0, 00:14:55.061 "data_size": 65536 00:14:55.061 }, 00:14:55.061 { 00:14:55.061 "name": null, 00:14:55.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.061 "is_configured": false, 00:14:55.061 "data_offset": 0, 00:14:55.061 "data_size": 65536 00:14:55.061 }, 00:14:55.061 { 00:14:55.061 "name": "BaseBdev3", 00:14:55.061 "uuid": "a52f757b-53af-5f3e-9ffe-294175aca231", 00:14:55.061 "is_configured": true, 00:14:55.061 "data_offset": 0, 00:14:55.061 "data_size": 65536 00:14:55.061 }, 00:14:55.061 { 00:14:55.061 "name": "BaseBdev4", 00:14:55.061 "uuid": "82fe4286-ae18-54ab-bf9b-86658608aea8", 00:14:55.061 "is_configured": true, 00:14:55.061 "data_offset": 0, 00:14:55.061 "data_size": 65536 00:14:55.061 } 00:14:55.061 ] 00:14:55.061 }' 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.061 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 89.38 IOPS, 268.12 MiB/s [2024-11-20T09:27:21.034Z] 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:14:55.578 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.578 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 [2024-11-20 09:27:20.807353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.578 [2024-11-20 09:27:20.807402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.578 00:14:55.578 Latency(us) 00:14:55.578 [2024-11-20T09:27:21.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.578 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:55.578 raid_bdev1 : 8.18 88.23 264.70 0.00 0.00 16052.75 316.59 111726.00 00:14:55.578 [2024-11-20T09:27:21.034Z] =================================================================================================================== 00:14:55.578 [2024-11-20T09:27:21.034Z] Total : 88.23 264.70 0.00 0.00 16052.75 316.59 111726.00 00:14:55.578 [2024-11-20 09:27:20.922645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.578 [2024-11-20 09:27:20.922701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.578 [2024-11-20 09:27:20.922806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.578 [2024-11-20 09:27:20.922817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:55.578 { 00:14:55.578 "results": [ 00:14:55.578 { 00:14:55.578 "job": "raid_bdev1", 00:14:55.578 "core_mask": "0x1", 00:14:55.578 "workload": "randrw", 00:14:55.578 "percentage": 50, 00:14:55.578 "status": "finished", 00:14:55.578 "queue_depth": 2, 00:14:55.578 "io_size": 3145728, 00:14:55.578 "runtime": 8.182945, 00:14:55.578 "iops": 88.2322928969949, 00:14:55.578 "mibps": 264.6968786909847, 00:14:55.578 
"io_failed": 0, 00:14:55.578 "io_timeout": 0, 00:14:55.578 "avg_latency_us": 16052.748653062214, 00:14:55.578 "min_latency_us": 316.5903930131004, 00:14:55.578 "max_latency_us": 111726.00174672488 00:14:55.578 } 00:14:55.578 ], 00:14:55.579 "core_count": 1 00:14:55.579 } 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.579 09:27:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:55.838 /dev/nbd0 00:14:56.097 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.097 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.098 1+0 records in 00:14:56.098 1+0 records out 00:14:56.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435249 s, 9.4 MB/s 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:56.098 09:27:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:56.098 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:56.098 /dev/nbd1 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.357 1+0 records in 00:14:56.357 1+0 records out 00:14:56.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521587 s, 7.9 MB/s 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.357 09:27:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.357 09:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@41 -- # break 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.618 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.619 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:56.878 /dev/nbd1 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- 
# (( i = 1 )) 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.878 1+0 records in 00:14:56.878 1+0 records out 00:14:56.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295972 s, 13.8 MB/s 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:56.878 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.879 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.879 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.137 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.396 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79137 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79137 ']' 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79137 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79137 00:14:57.655 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:57.655 killing process with pid 79137 00:14:57.655 Received shutdown signal, test time was about 
10.245059 seconds 00:14:57.655 00:14:57.655 Latency(us) 00:14:57.655 [2024-11-20T09:27:23.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.655 [2024-11-20T09:27:23.111Z] =================================================================================================================== 00:14:57.655 [2024-11-20T09:27:23.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.656 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:57.656 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79137' 00:14:57.656 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79137 00:14:57.656 09:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79137 00:14:57.656 [2024-11-20 09:27:22.957254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.224 [2024-11-20 09:27:23.415313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:59.602 ************************************ 00:14:59.602 END TEST raid_rebuild_test_io 00:14:59.602 ************************************ 00:14:59.602 00:14:59.602 real 0m13.933s 00:14:59.602 user 0m17.576s 00:14:59.602 sys 0m1.933s 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.602 09:27:24 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:59.602 09:27:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:59.602 09:27:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.602 09:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.602 
************************************ 00:14:59.602 START TEST raid_rebuild_test_sb_io 00:14:59.602 ************************************ 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:59.602 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.603 09:27:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79557 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79557 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # 
'[' -z 79557 ']' 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.603 09:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.603 [2024-11-20 09:27:24.882698] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:14:59.603 [2024-11-20 09:27:24.883221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.603 Zero copy mechanism will not be used. 
00:14:59.603 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79557 ] 00:14:59.861 [2024-11-20 09:27:25.081907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.861 [2024-11-20 09:27:25.206856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.119 [2024-11-20 09:27:25.418673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.119 [2024-11-20 09:27:25.418740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.377 BaseBdev1_malloc 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.377 [2024-11-20 09:27:25.801376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:00.377 [2024-11-20 09:27:25.801528] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.377 [2024-11-20 09:27:25.801584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:00.377 [2024-11-20 09:27:25.801623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.377 [2024-11-20 09:27:25.804165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.377 [2024-11-20 09:27:25.804259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:00.377 BaseBdev1 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.377 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 BaseBdev2_malloc 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 [2024-11-20 09:27:25.858501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:00.637 [2024-11-20 09:27:25.858629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.637 [2024-11-20 09:27:25.858692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:15:00.637 [2024-11-20 09:27:25.858727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.637 [2024-11-20 09:27:25.861368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.637 [2024-11-20 09:27:25.861475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:00.637 BaseBdev2 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 BaseBdev3_malloc 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 [2024-11-20 09:27:25.924801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:00.637 [2024-11-20 09:27:25.924919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.637 [2024-11-20 09:27:25.924965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.637 [2024-11-20 09:27:25.924978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.637 [2024-11-20 
09:27:25.927310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.637 [2024-11-20 09:27:25.927359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:00.637 BaseBdev3 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 BaseBdev4_malloc 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 [2024-11-20 09:27:25.980458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:00.637 [2024-11-20 09:27:25.980605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.637 [2024-11-20 09:27:25.980650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:00.637 [2024-11-20 09:27:25.980684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.637 [2024-11-20 09:27:25.983264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.637 [2024-11-20 09:27:25.983362] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:00.637 BaseBdev4 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 spare_malloc 00:15:00.637 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:00.637 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.637 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.637 spare_delay 00:15:00.637 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.637 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.638 [2024-11-20 09:27:26.051870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:00.638 [2024-11-20 09:27:26.052015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.638 [2024-11-20 09:27:26.052063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:00.638 [2024-11-20 09:27:26.052112] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.638 [2024-11-20 09:27:26.054595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.638 [2024-11-20 09:27:26.054677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:00.638 spare 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.638 [2024-11-20 09:27:26.063910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.638 [2024-11-20 09:27:26.066037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.638 [2024-11-20 09:27:26.066163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.638 [2024-11-20 09:27:26.066267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:00.638 [2024-11-20 09:27:26.066534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:00.638 [2024-11-20 09:27:26.066592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:00.638 [2024-11-20 09:27:26.066915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:00.638 [2024-11-20 09:27:26.067159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:00.638 [2024-11-20 09:27:26.067208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:00.638 
[2024-11-20 09:27:26.067460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.638 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.896 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.896 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:00.896 "name": "raid_bdev1", 00:15:00.896 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:00.896 "strip_size_kb": 0, 00:15:00.896 "state": "online", 00:15:00.896 "raid_level": "raid1", 00:15:00.896 "superblock": true, 00:15:00.896 "num_base_bdevs": 4, 00:15:00.896 "num_base_bdevs_discovered": 4, 00:15:00.896 "num_base_bdevs_operational": 4, 00:15:00.896 "base_bdevs_list": [ 00:15:00.896 { 00:15:00.896 "name": "BaseBdev1", 00:15:00.896 "uuid": "faa0c969-e269-5f94-9341-2bd4736b9ca0", 00:15:00.896 "is_configured": true, 00:15:00.897 "data_offset": 2048, 00:15:00.897 "data_size": 63488 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "name": "BaseBdev2", 00:15:00.897 "uuid": "bd7ce073-d62c-5623-aa82-6891a36b117d", 00:15:00.897 "is_configured": true, 00:15:00.897 "data_offset": 2048, 00:15:00.897 "data_size": 63488 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "name": "BaseBdev3", 00:15:00.897 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:00.897 "is_configured": true, 00:15:00.897 "data_offset": 2048, 00:15:00.897 "data_size": 63488 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "name": "BaseBdev4", 00:15:00.897 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:00.897 "is_configured": true, 00:15:00.897 "data_offset": 2048, 00:15:00.897 "data_size": 63488 00:15:00.897 } 00:15:00.897 ] 00:15:00.897 }' 00:15:00.897 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.897 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:01.155 [2024-11-20 09:27:26.547682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.155 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:01.414 [2024-11-20 09:27:26.619119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.414 09:27:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.414 "name": "raid_bdev1", 00:15:01.414 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:01.414 "strip_size_kb": 0, 00:15:01.414 "state": "online", 00:15:01.414 "raid_level": "raid1", 00:15:01.414 "superblock": true, 00:15:01.414 "num_base_bdevs": 4, 00:15:01.414 "num_base_bdevs_discovered": 3, 00:15:01.414 "num_base_bdevs_operational": 3, 
00:15:01.414 "base_bdevs_list": [ 00:15:01.414 { 00:15:01.414 "name": null, 00:15:01.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.414 "is_configured": false, 00:15:01.414 "data_offset": 0, 00:15:01.414 "data_size": 63488 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev2", 00:15:01.414 "uuid": "bd7ce073-d62c-5623-aa82-6891a36b117d", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 2048, 00:15:01.414 "data_size": 63488 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev3", 00:15:01.414 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 2048, 00:15:01.414 "data_size": 63488 00:15:01.414 }, 00:15:01.414 { 00:15:01.414 "name": "BaseBdev4", 00:15:01.414 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:01.414 "is_configured": true, 00:15:01.414 "data_offset": 2048, 00:15:01.414 "data_size": 63488 00:15:01.414 } 00:15:01.414 ] 00:15:01.414 }' 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.414 09:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.414 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:01.414 Zero copy mechanism will not be used. 00:15:01.414 Running I/O for 60 seconds... 
00:15:01.414 [2024-11-20 09:27:26.728411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:01.672 09:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.672 09:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.672 09:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.672 [2024-11-20 09:27:27.020395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.672 09:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.672 09:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:01.672 [2024-11-20 09:27:27.095816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:01.672 [2024-11-20 09:27:27.098014] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.931 [2024-11-20 09:27:27.216988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.931 [2024-11-20 09:27:27.218584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.189 [2024-11-20 09:27:27.422892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.189 [2024-11-20 09:27:27.423245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.447 100.00 IOPS, 300.00 MiB/s [2024-11-20T09:27:27.903Z] [2024-11-20 09:27:27.775146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:02.705 [2024-11-20 09:27:27.911751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:02.705 [2024-11-20 09:27:27.912238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.705 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.705 "name": "raid_bdev1", 00:15:02.705 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:02.705 "strip_size_kb": 0, 00:15:02.705 "state": "online", 00:15:02.705 "raid_level": "raid1", 00:15:02.705 "superblock": true, 00:15:02.705 "num_base_bdevs": 4, 00:15:02.705 "num_base_bdevs_discovered": 4, 00:15:02.705 "num_base_bdevs_operational": 4, 00:15:02.705 "process": { 00:15:02.706 "type": "rebuild", 00:15:02.706 "target": "spare", 00:15:02.706 "progress": { 00:15:02.706 "blocks": 12288, 00:15:02.706 
"percent": 19 00:15:02.706 } 00:15:02.706 }, 00:15:02.706 "base_bdevs_list": [ 00:15:02.706 { 00:15:02.706 "name": "spare", 00:15:02.706 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:02.706 "is_configured": true, 00:15:02.706 "data_offset": 2048, 00:15:02.706 "data_size": 63488 00:15:02.706 }, 00:15:02.706 { 00:15:02.706 "name": "BaseBdev2", 00:15:02.706 "uuid": "bd7ce073-d62c-5623-aa82-6891a36b117d", 00:15:02.706 "is_configured": true, 00:15:02.706 "data_offset": 2048, 00:15:02.706 "data_size": 63488 00:15:02.706 }, 00:15:02.706 { 00:15:02.706 "name": "BaseBdev3", 00:15:02.706 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:02.706 "is_configured": true, 00:15:02.706 "data_offset": 2048, 00:15:02.706 "data_size": 63488 00:15:02.706 }, 00:15:02.706 { 00:15:02.706 "name": "BaseBdev4", 00:15:02.706 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:02.706 "is_configured": true, 00:15:02.706 "data_offset": 2048, 00:15:02.706 "data_size": 63488 00:15:02.706 } 00:15:02.706 ] 00:15:02.706 }' 00:15:02.706 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.706 [2024-11-20 09:27:28.155205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.964 
[2024-11-20 09:27:28.205275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.964 [2024-11-20 09:27:28.310708] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.964 [2024-11-20 09:27:28.323351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.964 [2024-11-20 09:27:28.323577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.964 [2024-11-20 09:27:28.323622] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.964 [2024-11-20 09:27:28.352410] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.964 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.238 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.238 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.238 "name": "raid_bdev1", 00:15:03.238 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:03.238 "strip_size_kb": 0, 00:15:03.238 "state": "online", 00:15:03.238 "raid_level": "raid1", 00:15:03.238 "superblock": true, 00:15:03.238 "num_base_bdevs": 4, 00:15:03.238 "num_base_bdevs_discovered": 3, 00:15:03.238 "num_base_bdevs_operational": 3, 00:15:03.238 "base_bdevs_list": [ 00:15:03.238 { 00:15:03.238 "name": null, 00:15:03.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.238 "is_configured": false, 00:15:03.238 "data_offset": 0, 00:15:03.238 "data_size": 63488 00:15:03.238 }, 00:15:03.238 { 00:15:03.238 "name": "BaseBdev2", 00:15:03.238 "uuid": "bd7ce073-d62c-5623-aa82-6891a36b117d", 00:15:03.238 "is_configured": true, 00:15:03.238 "data_offset": 2048, 00:15:03.238 "data_size": 63488 00:15:03.238 }, 00:15:03.238 { 00:15:03.238 "name": "BaseBdev3", 00:15:03.238 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:03.238 "is_configured": true, 00:15:03.238 "data_offset": 2048, 00:15:03.238 "data_size": 63488 00:15:03.238 }, 00:15:03.238 { 00:15:03.238 "name": "BaseBdev4", 00:15:03.238 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:03.238 "is_configured": true, 00:15:03.238 "data_offset": 2048, 00:15:03.238 "data_size": 63488 00:15:03.238 } 00:15:03.238 ] 00:15:03.238 }' 
00:15:03.238 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.238 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 124.50 IOPS, 373.50 MiB/s [2024-11-20T09:27:28.953Z] 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.497 "name": "raid_bdev1", 00:15:03.497 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:03.497 "strip_size_kb": 0, 00:15:03.497 "state": "online", 00:15:03.497 "raid_level": "raid1", 00:15:03.497 "superblock": true, 00:15:03.497 "num_base_bdevs": 4, 00:15:03.497 "num_base_bdevs_discovered": 3, 00:15:03.497 "num_base_bdevs_operational": 3, 00:15:03.497 "base_bdevs_list": [ 00:15:03.497 { 00:15:03.497 "name": null, 00:15:03.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.497 
"is_configured": false, 00:15:03.497 "data_offset": 0, 00:15:03.497 "data_size": 63488 00:15:03.497 }, 00:15:03.497 { 00:15:03.497 "name": "BaseBdev2", 00:15:03.497 "uuid": "bd7ce073-d62c-5623-aa82-6891a36b117d", 00:15:03.497 "is_configured": true, 00:15:03.497 "data_offset": 2048, 00:15:03.497 "data_size": 63488 00:15:03.497 }, 00:15:03.497 { 00:15:03.497 "name": "BaseBdev3", 00:15:03.497 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:03.497 "is_configured": true, 00:15:03.497 "data_offset": 2048, 00:15:03.497 "data_size": 63488 00:15:03.497 }, 00:15:03.497 { 00:15:03.497 "name": "BaseBdev4", 00:15:03.497 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:03.497 "is_configured": true, 00:15:03.497 "data_offset": 2048, 00:15:03.497 "data_size": 63488 00:15:03.497 } 00:15:03.497 ] 00:15:03.497 }' 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.497 09:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 [2024-11-20 09:27:28.943712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.755 09:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.755 09:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:03.755 [2024-11-20 09:27:29.018719] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:03.755 [2024-11-20 09:27:29.021095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.755 [2024-11-20 09:27:29.149628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:03.755 [2024-11-20 09:27:29.150336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:04.013 [2024-11-20 09:27:29.384542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:04.271 [2024-11-20 09:27:29.661903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:04.529 138.00 IOPS, 414.00 MiB/s [2024-11-20T09:27:29.985Z] [2024-11-20 09:27:29.782133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.788 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.788 "name": "raid_bdev1", 00:15:04.788 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:04.788 "strip_size_kb": 0, 00:15:04.788 "state": "online", 00:15:04.788 "raid_level": "raid1", 00:15:04.788 "superblock": true, 00:15:04.788 "num_base_bdevs": 4, 00:15:04.788 "num_base_bdevs_discovered": 4, 00:15:04.788 "num_base_bdevs_operational": 4, 00:15:04.788 "process": { 00:15:04.788 "type": "rebuild", 00:15:04.788 "target": "spare", 00:15:04.788 "progress": { 00:15:04.788 "blocks": 12288, 00:15:04.788 "percent": 19 00:15:04.788 } 00:15:04.788 }, 00:15:04.788 "base_bdevs_list": [ 00:15:04.789 { 00:15:04.789 "name": "spare", 00:15:04.789 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:04.789 "is_configured": true, 00:15:04.789 "data_offset": 2048, 00:15:04.789 "data_size": 63488 00:15:04.789 }, 00:15:04.789 { 00:15:04.789 "name": "BaseBdev2", 00:15:04.789 "uuid": "bd7ce073-d62c-5623-aa82-6891a36b117d", 00:15:04.789 "is_configured": true, 00:15:04.789 "data_offset": 2048, 00:15:04.789 "data_size": 63488 00:15:04.789 }, 00:15:04.789 { 00:15:04.789 "name": "BaseBdev3", 00:15:04.789 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:04.789 "is_configured": true, 00:15:04.789 "data_offset": 2048, 00:15:04.789 "data_size": 63488 00:15:04.789 }, 00:15:04.789 { 00:15:04.789 "name": "BaseBdev4", 00:15:04.789 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:04.789 "is_configured": true, 00:15:04.789 "data_offset": 2048, 00:15:04.789 "data_size": 63488 00:15:04.789 } 00:15:04.789 ] 00:15:04.789 }' 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.789 [2024-11-20 
09:27:30.071859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:04.789 [2024-11-20 09:27:30.072575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:04.789 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.789 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.789 [2024-11-20 09:27:30.132966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.789 [2024-11-20 09:27:30.182108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:05.047 [2024-11-20 09:27:30.392071] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:05.047 [2024-11-20 09:27:30.392226] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.047 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.048 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.048 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.048 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.048 "name": "raid_bdev1", 00:15:05.048 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:05.048 "strip_size_kb": 0, 00:15:05.048 "state": "online", 00:15:05.048 "raid_level": 
"raid1", 00:15:05.048 "superblock": true, 00:15:05.048 "num_base_bdevs": 4, 00:15:05.048 "num_base_bdevs_discovered": 3, 00:15:05.048 "num_base_bdevs_operational": 3, 00:15:05.048 "process": { 00:15:05.048 "type": "rebuild", 00:15:05.048 "target": "spare", 00:15:05.048 "progress": { 00:15:05.048 "blocks": 16384, 00:15:05.048 "percent": 25 00:15:05.048 } 00:15:05.048 }, 00:15:05.048 "base_bdevs_list": [ 00:15:05.048 { 00:15:05.048 "name": "spare", 00:15:05.048 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:05.048 "is_configured": true, 00:15:05.048 "data_offset": 2048, 00:15:05.048 "data_size": 63488 00:15:05.048 }, 00:15:05.048 { 00:15:05.048 "name": null, 00:15:05.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.048 "is_configured": false, 00:15:05.048 "data_offset": 0, 00:15:05.048 "data_size": 63488 00:15:05.048 }, 00:15:05.048 { 00:15:05.048 "name": "BaseBdev3", 00:15:05.048 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:05.048 "is_configured": true, 00:15:05.048 "data_offset": 2048, 00:15:05.048 "data_size": 63488 00:15:05.048 }, 00:15:05.048 { 00:15:05.048 "name": "BaseBdev4", 00:15:05.048 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:05.048 "is_configured": true, 00:15:05.048 "data_offset": 2048, 00:15:05.048 "data_size": 63488 00:15:05.048 } 00:15:05.048 ] 00:15:05.048 }' 00:15:05.048 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=525 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.307 "name": "raid_bdev1", 00:15:05.307 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:05.307 "strip_size_kb": 0, 00:15:05.307 "state": "online", 00:15:05.307 "raid_level": "raid1", 00:15:05.307 "superblock": true, 00:15:05.307 "num_base_bdevs": 4, 00:15:05.307 "num_base_bdevs_discovered": 3, 00:15:05.307 "num_base_bdevs_operational": 3, 00:15:05.307 "process": { 00:15:05.307 "type": "rebuild", 00:15:05.307 "target": "spare", 00:15:05.307 "progress": { 00:15:05.307 "blocks": 18432, 00:15:05.307 "percent": 29 00:15:05.307 } 00:15:05.307 }, 00:15:05.307 "base_bdevs_list": [ 00:15:05.307 { 00:15:05.307 "name": "spare", 00:15:05.307 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:05.307 "is_configured": 
true, 00:15:05.307 "data_offset": 2048, 00:15:05.307 "data_size": 63488 00:15:05.307 }, 00:15:05.307 { 00:15:05.307 "name": null, 00:15:05.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.307 "is_configured": false, 00:15:05.307 "data_offset": 0, 00:15:05.307 "data_size": 63488 00:15:05.307 }, 00:15:05.307 { 00:15:05.307 "name": "BaseBdev3", 00:15:05.307 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:05.307 "is_configured": true, 00:15:05.307 "data_offset": 2048, 00:15:05.307 "data_size": 63488 00:15:05.307 }, 00:15:05.307 { 00:15:05.307 "name": "BaseBdev4", 00:15:05.307 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:05.307 "is_configured": true, 00:15:05.307 "data_offset": 2048, 00:15:05.307 "data_size": 63488 00:15:05.307 } 00:15:05.307 ] 00:15:05.307 }' 00:15:05.307 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.308 [2024-11-20 09:27:30.650850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:05.308 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.308 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.308 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.308 09:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.877 115.25 IOPS, 345.75 MiB/s [2024-11-20T09:27:31.333Z] [2024-11-20 09:27:31.088706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:05.877 [2024-11-20 09:27:31.291956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:05.877 [2024-11-20 09:27:31.292376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:06.472 [2024-11-20 09:27:31.613396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.472 104.60 IOPS, 313.80 MiB/s [2024-11-20T09:27:31.928Z] 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.472 "name": "raid_bdev1", 00:15:06.472 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:06.472 "strip_size_kb": 0, 00:15:06.472 "state": "online", 00:15:06.472 "raid_level": "raid1", 00:15:06.472 "superblock": true, 00:15:06.472 "num_base_bdevs": 4, 00:15:06.472 "num_base_bdevs_discovered": 3, 00:15:06.472 "num_base_bdevs_operational": 3, 
00:15:06.472 "process": { 00:15:06.472 "type": "rebuild", 00:15:06.472 "target": "spare", 00:15:06.472 "progress": { 00:15:06.472 "blocks": 34816, 00:15:06.472 "percent": 54 00:15:06.472 } 00:15:06.472 }, 00:15:06.472 "base_bdevs_list": [ 00:15:06.472 { 00:15:06.472 "name": "spare", 00:15:06.472 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:06.472 "is_configured": true, 00:15:06.472 "data_offset": 2048, 00:15:06.472 "data_size": 63488 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "name": null, 00:15:06.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.472 "is_configured": false, 00:15:06.472 "data_offset": 0, 00:15:06.472 "data_size": 63488 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "name": "BaseBdev3", 00:15:06.472 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:06.472 "is_configured": true, 00:15:06.472 "data_offset": 2048, 00:15:06.472 "data_size": 63488 00:15:06.472 }, 00:15:06.472 { 00:15:06.472 "name": "BaseBdev4", 00:15:06.472 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:06.472 "is_configured": true, 00:15:06.472 "data_offset": 2048, 00:15:06.472 "data_size": 63488 00:15:06.472 } 00:15:06.472 ] 00:15:06.472 }' 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.472 09:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.409 [2024-11-20 09:27:32.613494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:07.409 94.00 IOPS, 282.00 MiB/s [2024-11-20T09:27:32.865Z] 09:27:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.409 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.409 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.409 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.409 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.409 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.668 "name": "raid_bdev1", 00:15:07.668 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:07.668 "strip_size_kb": 0, 00:15:07.668 "state": "online", 00:15:07.668 "raid_level": "raid1", 00:15:07.668 "superblock": true, 00:15:07.668 "num_base_bdevs": 4, 00:15:07.668 "num_base_bdevs_discovered": 3, 00:15:07.668 "num_base_bdevs_operational": 3, 00:15:07.668 "process": { 00:15:07.668 "type": "rebuild", 00:15:07.668 "target": "spare", 00:15:07.668 "progress": { 00:15:07.668 "blocks": 57344, 00:15:07.668 "percent": 90 00:15:07.668 } 00:15:07.668 }, 00:15:07.668 "base_bdevs_list": [ 00:15:07.668 { 00:15:07.668 "name": "spare", 00:15:07.668 "uuid": 
"3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:07.668 "is_configured": true, 00:15:07.668 "data_offset": 2048, 00:15:07.668 "data_size": 63488 00:15:07.668 }, 00:15:07.668 { 00:15:07.668 "name": null, 00:15:07.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.668 "is_configured": false, 00:15:07.668 "data_offset": 0, 00:15:07.668 "data_size": 63488 00:15:07.668 }, 00:15:07.668 { 00:15:07.668 "name": "BaseBdev3", 00:15:07.668 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:07.668 "is_configured": true, 00:15:07.668 "data_offset": 2048, 00:15:07.668 "data_size": 63488 00:15:07.668 }, 00:15:07.668 { 00:15:07.668 "name": "BaseBdev4", 00:15:07.668 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:07.668 "is_configured": true, 00:15:07.668 "data_offset": 2048, 00:15:07.668 "data_size": 63488 00:15:07.668 } 00:15:07.668 ] 00:15:07.668 }' 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.668 [2024-11-20 09:27:32.937626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.668 09:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.928 [2024-11-20 09:27:33.264528] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:07.928 [2024-11-20 09:27:33.370548] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:07.928 [2024-11-20 09:27:33.375190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.756 84.14 IOPS, 
252.43 MiB/s [2024-11-20T09:27:34.212Z] 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.756 09:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.756 "name": "raid_bdev1", 00:15:08.756 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:08.756 "strip_size_kb": 0, 00:15:08.756 "state": "online", 00:15:08.756 "raid_level": "raid1", 00:15:08.756 "superblock": true, 00:15:08.756 "num_base_bdevs": 4, 00:15:08.756 "num_base_bdevs_discovered": 3, 00:15:08.756 "num_base_bdevs_operational": 3, 00:15:08.756 "base_bdevs_list": [ 00:15:08.756 { 00:15:08.756 "name": "spare", 00:15:08.756 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:08.756 "is_configured": true, 00:15:08.756 "data_offset": 2048, 00:15:08.756 "data_size": 63488 00:15:08.756 }, 
00:15:08.756 { 00:15:08.756 "name": null, 00:15:08.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.756 "is_configured": false, 00:15:08.756 "data_offset": 0, 00:15:08.756 "data_size": 63488 00:15:08.756 }, 00:15:08.756 { 00:15:08.756 "name": "BaseBdev3", 00:15:08.756 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:08.756 "is_configured": true, 00:15:08.756 "data_offset": 2048, 00:15:08.756 "data_size": 63488 00:15:08.756 }, 00:15:08.756 { 00:15:08.756 "name": "BaseBdev4", 00:15:08.756 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:08.756 "is_configured": true, 00:15:08.756 "data_offset": 2048, 00:15:08.756 "data_size": 63488 00:15:08.756 } 00:15:08.756 ] 00:15:08.756 }' 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.756 09:27:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.756 "name": "raid_bdev1", 00:15:08.756 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:08.756 "strip_size_kb": 0, 00:15:08.756 "state": "online", 00:15:08.756 "raid_level": "raid1", 00:15:08.756 "superblock": true, 00:15:08.756 "num_base_bdevs": 4, 00:15:08.756 "num_base_bdevs_discovered": 3, 00:15:08.756 "num_base_bdevs_operational": 3, 00:15:08.756 "base_bdevs_list": [ 00:15:08.756 { 00:15:08.756 "name": "spare", 00:15:08.756 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:08.756 "is_configured": true, 00:15:08.756 "data_offset": 2048, 00:15:08.756 "data_size": 63488 00:15:08.756 }, 00:15:08.756 { 00:15:08.756 "name": null, 00:15:08.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.756 "is_configured": false, 00:15:08.756 "data_offset": 0, 00:15:08.756 "data_size": 63488 00:15:08.756 }, 00:15:08.756 { 00:15:08.756 "name": "BaseBdev3", 00:15:08.756 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:08.756 "is_configured": true, 00:15:08.756 "data_offset": 2048, 00:15:08.756 "data_size": 63488 00:15:08.756 }, 00:15:08.756 { 00:15:08.756 "name": "BaseBdev4", 00:15:08.756 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:08.756 "is_configured": true, 00:15:08.756 "data_offset": 2048, 00:15:08.756 "data_size": 63488 00:15:08.756 } 00:15:08.756 ] 00:15:08.756 }' 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.756 09:27:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.756 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.015 "name": "raid_bdev1", 00:15:09.015 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:09.015 "strip_size_kb": 0, 00:15:09.015 "state": "online", 00:15:09.015 "raid_level": "raid1", 00:15:09.015 "superblock": true, 00:15:09.015 "num_base_bdevs": 4, 00:15:09.015 "num_base_bdevs_discovered": 3, 00:15:09.015 "num_base_bdevs_operational": 3, 00:15:09.015 "base_bdevs_list": [ 00:15:09.015 { 00:15:09.015 "name": "spare", 00:15:09.015 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 00:15:09.015 }, 00:15:09.015 { 00:15:09.015 "name": null, 00:15:09.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.015 "is_configured": false, 00:15:09.015 "data_offset": 0, 00:15:09.015 "data_size": 63488 00:15:09.015 }, 00:15:09.015 { 00:15:09.015 "name": "BaseBdev3", 00:15:09.015 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 00:15:09.015 }, 00:15:09.015 { 00:15:09.015 "name": "BaseBdev4", 00:15:09.015 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 00:15:09.015 } 00:15:09.015 ] 00:15:09.015 }' 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.015 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.274 [2024-11-20 
09:27:34.674072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.274 [2024-11-20 09:27:34.674181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.274 00:15:09.274 Latency(us) 00:15:09.274 [2024-11-20T09:27:34.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.274 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:09.274 raid_bdev1 : 7.97 78.63 235.88 0.00 0.00 17394.82 354.15 120883.87 00:15:09.274 [2024-11-20T09:27:34.730Z] =================================================================================================================== 00:15:09.274 [2024-11-20T09:27:34.730Z] Total : 78.63 235.88 0.00 0.00 17394.82 354.15 120883.87 00:15:09.274 [2024-11-20 09:27:34.717722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.274 [2024-11-20 09:27:34.717856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.274 [2024-11-20 09:27:34.718008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.274 [2024-11-20 09:27:34.718080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.274 { 00:15:09.274 "results": [ 00:15:09.274 { 00:15:09.274 "job": "raid_bdev1", 00:15:09.274 "core_mask": "0x1", 00:15:09.274 "workload": "randrw", 00:15:09.274 "percentage": 50, 00:15:09.274 "status": "finished", 00:15:09.274 "queue_depth": 2, 00:15:09.274 "io_size": 3145728, 00:15:09.274 "runtime": 7.974513, 00:15:09.274 "iops": 78.62549098609533, 00:15:09.274 "mibps": 235.876472958286, 00:15:09.274 "io_failed": 0, 00:15:09.274 "io_timeout": 0, 00:15:09.274 "avg_latency_us": 17394.821927386947, 00:15:09.274 "min_latency_us": 354.15196506550217, 00:15:09.274 "max_latency_us": 120883.87074235808 00:15:09.274 } 00:15:09.274 ], 
00:15:09.274 "core_count": 1 00:15:09.274 } 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.274 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.532 09:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.532 09:27:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:09.790 /dev/nbd0 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.790 1+0 records in 00:15:09.790 1+0 records out 00:15:09.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338654 s, 12.1 MB/s 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.790 
09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:09.790 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.791 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.791 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.791 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.791 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:10.049 /dev/nbd1 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.049 1+0 records in 00:15:10.049 1+0 records out 00:15:10.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361036 s, 11.3 MB/s 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.049 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.307 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:10.566 09:27:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.566 09:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:10.824 /dev/nbd1 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.824 1+0 records in 00:15:10.824 1+0 records out 00:15:10.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260265 s, 15.7 MB/s 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.824 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.082 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:11.083 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.083 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.341 [2024-11-20 09:27:36.723827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:11.341 [2024-11-20 09:27:36.723961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.341 [2024-11-20 09:27:36.724017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:11.341 [2024-11-20 09:27:36.724052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.341 [2024-11-20 09:27:36.726660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.341 [2024-11-20 09:27:36.726759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:11.341 [2024-11-20 09:27:36.726918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:11.341 [2024-11-20 09:27:36.727012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.341 [2024-11-20 09:27:36.727249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.341 [2024-11-20 09:27:36.727423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:11.341 spare 00:15:11.341 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.342 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:11.342 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.342 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.599 [2024-11-20 09:27:36.827416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:11.599 [2024-11-20 09:27:36.827569] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:11.599 [2024-11-20 09:27:36.828034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:11.599 [2024-11-20 09:27:36.828302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:11.599 [2024-11-20 09:27:36.828356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:11.599 [2024-11-20 09:27:36.828686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.599 09:27:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.599 "name": "raid_bdev1", 00:15:11.599 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:11.599 "strip_size_kb": 0, 00:15:11.599 "state": "online", 00:15:11.599 "raid_level": "raid1", 00:15:11.599 "superblock": true, 00:15:11.599 "num_base_bdevs": 4, 00:15:11.599 "num_base_bdevs_discovered": 3, 00:15:11.599 "num_base_bdevs_operational": 3, 00:15:11.599 "base_bdevs_list": [ 00:15:11.599 { 00:15:11.599 "name": "spare", 00:15:11.599 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:11.599 "is_configured": true, 00:15:11.599 "data_offset": 2048, 00:15:11.599 "data_size": 63488 00:15:11.599 }, 00:15:11.599 { 00:15:11.599 "name": null, 00:15:11.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.599 "is_configured": false, 00:15:11.599 "data_offset": 2048, 00:15:11.599 "data_size": 63488 00:15:11.599 }, 00:15:11.599 { 00:15:11.599 "name": "BaseBdev3", 00:15:11.599 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:11.599 "is_configured": true, 00:15:11.599 "data_offset": 2048, 00:15:11.599 "data_size": 63488 00:15:11.599 }, 00:15:11.599 { 00:15:11.599 "name": "BaseBdev4", 00:15:11.599 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:11.599 "is_configured": true, 00:15:11.599 "data_offset": 2048, 00:15:11.599 "data_size": 63488 00:15:11.599 } 00:15:11.599 ] 00:15:11.599 }' 00:15:11.599 09:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.599 09:27:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.165 "name": "raid_bdev1", 00:15:12.165 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:12.165 "strip_size_kb": 0, 00:15:12.165 "state": "online", 00:15:12.165 "raid_level": "raid1", 00:15:12.165 "superblock": true, 00:15:12.165 "num_base_bdevs": 4, 00:15:12.165 "num_base_bdevs_discovered": 3, 00:15:12.165 "num_base_bdevs_operational": 3, 00:15:12.165 "base_bdevs_list": [ 00:15:12.165 { 00:15:12.165 "name": "spare", 00:15:12.165 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:12.165 "is_configured": true, 00:15:12.165 "data_offset": 2048, 00:15:12.165 "data_size": 63488 00:15:12.165 }, 00:15:12.165 { 00:15:12.165 "name": null, 00:15:12.165 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:12.165 "is_configured": false, 00:15:12.165 "data_offset": 2048, 00:15:12.165 "data_size": 63488 00:15:12.165 }, 00:15:12.165 { 00:15:12.165 "name": "BaseBdev3", 00:15:12.165 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:12.165 "is_configured": true, 00:15:12.165 "data_offset": 2048, 00:15:12.165 "data_size": 63488 00:15:12.165 }, 00:15:12.165 { 00:15:12.165 "name": "BaseBdev4", 00:15:12.165 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:12.165 "is_configured": true, 00:15:12.165 "data_offset": 2048, 00:15:12.165 "data_size": 63488 00:15:12.165 } 00:15:12.165 ] 00:15:12.165 }' 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.165 [2024-11-20 09:27:37.527583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.165 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.165 "name": "raid_bdev1", 00:15:12.165 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:12.165 "strip_size_kb": 0, 00:15:12.165 "state": "online", 00:15:12.165 "raid_level": "raid1", 00:15:12.165 "superblock": true, 00:15:12.165 "num_base_bdevs": 4, 00:15:12.165 "num_base_bdevs_discovered": 2, 00:15:12.165 "num_base_bdevs_operational": 2, 00:15:12.165 "base_bdevs_list": [ 00:15:12.165 { 00:15:12.165 "name": null, 00:15:12.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.165 "is_configured": false, 00:15:12.165 "data_offset": 0, 00:15:12.165 "data_size": 63488 00:15:12.165 }, 00:15:12.166 { 00:15:12.166 "name": null, 00:15:12.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.166 "is_configured": false, 00:15:12.166 "data_offset": 2048, 00:15:12.166 "data_size": 63488 00:15:12.166 }, 00:15:12.166 { 00:15:12.166 "name": "BaseBdev3", 00:15:12.166 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:12.166 "is_configured": true, 00:15:12.166 "data_offset": 2048, 00:15:12.166 "data_size": 63488 00:15:12.166 }, 00:15:12.166 { 00:15:12.166 "name": "BaseBdev4", 00:15:12.166 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:12.166 "is_configured": true, 00:15:12.166 "data_offset": 2048, 00:15:12.166 "data_size": 63488 00:15:12.166 } 00:15:12.166 ] 00:15:12.166 }' 00:15:12.166 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.166 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.743 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.743 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.743 09:27:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.743 [2024-11-20 09:27:37.990980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.743 [2024-11-20 09:27:37.991205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:12.743 [2024-11-20 09:27:37.991222] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:12.744 [2024-11-20 09:27:37.991281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.744 [2024-11-20 09:27:38.009409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:12.744 09:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.744 09:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:12.744 [2024-11-20 09:27:38.011664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.684 "name": "raid_bdev1", 00:15:13.684 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:13.684 "strip_size_kb": 0, 00:15:13.684 "state": "online", 00:15:13.684 "raid_level": "raid1", 00:15:13.684 "superblock": true, 00:15:13.684 "num_base_bdevs": 4, 00:15:13.684 "num_base_bdevs_discovered": 3, 00:15:13.684 "num_base_bdevs_operational": 3, 00:15:13.684 "process": { 00:15:13.684 "type": "rebuild", 00:15:13.684 "target": "spare", 00:15:13.684 "progress": { 00:15:13.684 "blocks": 20480, 00:15:13.684 "percent": 32 00:15:13.684 } 00:15:13.684 }, 00:15:13.684 "base_bdevs_list": [ 00:15:13.684 { 00:15:13.684 "name": "spare", 00:15:13.684 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:13.684 "is_configured": true, 00:15:13.684 "data_offset": 2048, 00:15:13.684 "data_size": 63488 00:15:13.684 }, 00:15:13.684 { 00:15:13.684 "name": null, 00:15:13.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.684 "is_configured": false, 00:15:13.684 "data_offset": 2048, 00:15:13.684 "data_size": 63488 00:15:13.684 }, 00:15:13.684 { 00:15:13.684 "name": "BaseBdev3", 00:15:13.684 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:13.684 "is_configured": true, 00:15:13.684 "data_offset": 2048, 00:15:13.684 "data_size": 63488 00:15:13.684 }, 00:15:13.684 { 00:15:13.684 "name": "BaseBdev4", 00:15:13.684 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:13.684 "is_configured": true, 00:15:13.684 "data_offset": 2048, 00:15:13.684 "data_size": 63488 00:15:13.684 } 00:15:13.684 ] 00:15:13.684 }' 00:15:13.684 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.685 09:27:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.685 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.943 [2024-11-20 09:27:39.170822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.943 [2024-11-20 09:27:39.217811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.943 [2024-11-20 09:27:39.217998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.943 [2024-11-20 09:27:39.218042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.943 [2024-11-20 09:27:39.218065] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.943 09:27:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.943 "name": "raid_bdev1", 00:15:13.943 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:13.943 "strip_size_kb": 0, 00:15:13.943 "state": "online", 00:15:13.943 "raid_level": "raid1", 00:15:13.943 "superblock": true, 00:15:13.943 "num_base_bdevs": 4, 00:15:13.943 "num_base_bdevs_discovered": 2, 00:15:13.943 "num_base_bdevs_operational": 2, 00:15:13.943 "base_bdevs_list": [ 00:15:13.943 { 00:15:13.943 "name": null, 00:15:13.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.943 "is_configured": false, 00:15:13.943 "data_offset": 0, 00:15:13.943 "data_size": 63488 00:15:13.943 }, 00:15:13.943 { 00:15:13.943 "name": null, 00:15:13.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.943 "is_configured": false, 00:15:13.943 "data_offset": 2048, 00:15:13.943 
"data_size": 63488 00:15:13.943 }, 00:15:13.943 { 00:15:13.943 "name": "BaseBdev3", 00:15:13.943 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:13.943 "is_configured": true, 00:15:13.943 "data_offset": 2048, 00:15:13.943 "data_size": 63488 00:15:13.943 }, 00:15:13.943 { 00:15:13.943 "name": "BaseBdev4", 00:15:13.943 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:13.943 "is_configured": true, 00:15:13.943 "data_offset": 2048, 00:15:13.943 "data_size": 63488 00:15:13.943 } 00:15:13.943 ] 00:15:13.943 }' 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.943 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.511 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.511 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.511 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.511 [2024-11-20 09:27:39.720609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.511 [2024-11-20 09:27:39.720756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.511 [2024-11-20 09:27:39.720811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:14.511 [2024-11-20 09:27:39.720823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.511 [2024-11-20 09:27:39.721384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.511 [2024-11-20 09:27:39.721415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.511 [2024-11-20 09:27:39.721556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.511 [2024-11-20 09:27:39.721637] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:14.511 [2024-11-20 09:27:39.721658] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:14.511 [2024-11-20 09:27:39.721692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.511 spare 00:15:14.511 [2024-11-20 09:27:39.739916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:14.511 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.511 09:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:14.511 [2024-11-20 09:27:39.742111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.448 "name": "raid_bdev1", 00:15:15.448 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:15.448 "strip_size_kb": 0, 00:15:15.448 "state": "online", 00:15:15.448 "raid_level": "raid1", 00:15:15.448 "superblock": true, 00:15:15.448 "num_base_bdevs": 4, 00:15:15.448 "num_base_bdevs_discovered": 3, 00:15:15.448 "num_base_bdevs_operational": 3, 00:15:15.448 "process": { 00:15:15.448 "type": "rebuild", 00:15:15.448 "target": "spare", 00:15:15.448 "progress": { 00:15:15.448 "blocks": 20480, 00:15:15.448 "percent": 32 00:15:15.448 } 00:15:15.448 }, 00:15:15.448 "base_bdevs_list": [ 00:15:15.448 { 00:15:15.448 "name": "spare", 00:15:15.448 "uuid": "3ebcbb66-37e9-547c-ab3a-c3abe21ba6be", 00:15:15.448 "is_configured": true, 00:15:15.448 "data_offset": 2048, 00:15:15.448 "data_size": 63488 00:15:15.448 }, 00:15:15.448 { 00:15:15.448 "name": null, 00:15:15.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.448 "is_configured": false, 00:15:15.448 "data_offset": 2048, 00:15:15.448 "data_size": 63488 00:15:15.448 }, 00:15:15.448 { 00:15:15.448 "name": "BaseBdev3", 00:15:15.448 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:15.448 "is_configured": true, 00:15:15.448 "data_offset": 2048, 00:15:15.448 "data_size": 63488 00:15:15.448 }, 00:15:15.448 { 00:15:15.448 "name": "BaseBdev4", 00:15:15.448 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:15.448 "is_configured": true, 00:15:15.448 "data_offset": 2048, 00:15:15.448 "data_size": 63488 00:15:15.448 } 00:15:15.448 ] 00:15:15.448 }' 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.448 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.448 [2024-11-20 09:27:40.885519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.706 [2024-11-20 09:27:40.948422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.706 [2024-11-20 09:27:40.948630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.706 [2024-11-20 09:27:40.948652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.706 [2024-11-20 09:27:40.948668] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:15:15.706 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.707 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.707 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.707 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.707 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.707 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.707 09:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.707 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.707 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.707 "name": "raid_bdev1", 00:15:15.707 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:15.707 "strip_size_kb": 0, 00:15:15.707 "state": "online", 00:15:15.707 "raid_level": "raid1", 00:15:15.707 "superblock": true, 00:15:15.707 "num_base_bdevs": 4, 00:15:15.707 "num_base_bdevs_discovered": 2, 00:15:15.707 "num_base_bdevs_operational": 2, 00:15:15.707 "base_bdevs_list": [ 00:15:15.707 { 00:15:15.707 "name": null, 00:15:15.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.707 "is_configured": false, 00:15:15.707 "data_offset": 0, 00:15:15.707 "data_size": 63488 00:15:15.707 }, 00:15:15.707 { 00:15:15.707 "name": null, 00:15:15.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.707 "is_configured": false, 00:15:15.707 "data_offset": 2048, 00:15:15.707 "data_size": 63488 00:15:15.707 }, 00:15:15.707 { 00:15:15.707 "name": "BaseBdev3", 00:15:15.707 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:15.707 "is_configured": true, 
00:15:15.707 "data_offset": 2048, 00:15:15.707 "data_size": 63488 00:15:15.707 }, 00:15:15.707 { 00:15:15.707 "name": "BaseBdev4", 00:15:15.707 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:15.707 "is_configured": true, 00:15:15.707 "data_offset": 2048, 00:15:15.707 "data_size": 63488 00:15:15.707 } 00:15:15.707 ] 00:15:15.707 }' 00:15:15.707 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.707 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.274 "name": "raid_bdev1", 00:15:16.274 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:16.274 "strip_size_kb": 0, 00:15:16.274 "state": "online", 00:15:16.274 "raid_level": "raid1", 00:15:16.274 
"superblock": true, 00:15:16.274 "num_base_bdevs": 4, 00:15:16.274 "num_base_bdevs_discovered": 2, 00:15:16.274 "num_base_bdevs_operational": 2, 00:15:16.274 "base_bdevs_list": [ 00:15:16.274 { 00:15:16.274 "name": null, 00:15:16.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.274 "is_configured": false, 00:15:16.274 "data_offset": 0, 00:15:16.274 "data_size": 63488 00:15:16.274 }, 00:15:16.274 { 00:15:16.274 "name": null, 00:15:16.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.274 "is_configured": false, 00:15:16.274 "data_offset": 2048, 00:15:16.274 "data_size": 63488 00:15:16.274 }, 00:15:16.274 { 00:15:16.274 "name": "BaseBdev3", 00:15:16.274 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:16.274 "is_configured": true, 00:15:16.274 "data_offset": 2048, 00:15:16.274 "data_size": 63488 00:15:16.274 }, 00:15:16.274 { 00:15:16.274 "name": "BaseBdev4", 00:15:16.274 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:16.274 "is_configured": true, 00:15:16.274 "data_offset": 2048, 00:15:16.274 "data_size": 63488 00:15:16.274 } 00:15:16.274 ] 00:15:16.274 }' 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.274 [2024-11-20 09:27:41.585580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:16.274 [2024-11-20 09:27:41.585763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.274 [2024-11-20 09:27:41.585806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:16.274 [2024-11-20 09:27:41.585842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.274 [2024-11-20 09:27:41.586403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.274 [2024-11-20 09:27:41.586499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.274 [2024-11-20 09:27:41.586642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:16.274 [2024-11-20 09:27:41.586698] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:16.274 [2024-11-20 09:27:41.586745] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:16.274 [2024-11-20 09:27:41.586814] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:16.274 BaseBdev1 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.274 09:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.210 "name": "raid_bdev1", 00:15:17.210 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:17.210 "strip_size_kb": 0, 00:15:17.210 "state": "online", 00:15:17.210 "raid_level": "raid1", 00:15:17.210 "superblock": true, 00:15:17.210 
"num_base_bdevs": 4, 00:15:17.210 "num_base_bdevs_discovered": 2, 00:15:17.210 "num_base_bdevs_operational": 2, 00:15:17.210 "base_bdevs_list": [ 00:15:17.210 { 00:15:17.210 "name": null, 00:15:17.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.210 "is_configured": false, 00:15:17.210 "data_offset": 0, 00:15:17.210 "data_size": 63488 00:15:17.210 }, 00:15:17.210 { 00:15:17.210 "name": null, 00:15:17.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.210 "is_configured": false, 00:15:17.210 "data_offset": 2048, 00:15:17.210 "data_size": 63488 00:15:17.210 }, 00:15:17.210 { 00:15:17.210 "name": "BaseBdev3", 00:15:17.210 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:17.210 "is_configured": true, 00:15:17.210 "data_offset": 2048, 00:15:17.210 "data_size": 63488 00:15:17.210 }, 00:15:17.210 { 00:15:17.210 "name": "BaseBdev4", 00:15:17.210 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:17.210 "is_configured": true, 00:15:17.210 "data_offset": 2048, 00:15:17.210 "data_size": 63488 00:15:17.210 } 00:15:17.210 ] 00:15:17.210 }' 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.210 09:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.777 09:27:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.777 "name": "raid_bdev1", 00:15:17.777 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:17.777 "strip_size_kb": 0, 00:15:17.777 "state": "online", 00:15:17.777 "raid_level": "raid1", 00:15:17.777 "superblock": true, 00:15:17.777 "num_base_bdevs": 4, 00:15:17.777 "num_base_bdevs_discovered": 2, 00:15:17.777 "num_base_bdevs_operational": 2, 00:15:17.777 "base_bdevs_list": [ 00:15:17.777 { 00:15:17.777 "name": null, 00:15:17.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.777 "is_configured": false, 00:15:17.777 "data_offset": 0, 00:15:17.777 "data_size": 63488 00:15:17.777 }, 00:15:17.777 { 00:15:17.777 "name": null, 00:15:17.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.777 "is_configured": false, 00:15:17.777 "data_offset": 2048, 00:15:17.777 "data_size": 63488 00:15:17.777 }, 00:15:17.777 { 00:15:17.777 "name": "BaseBdev3", 00:15:17.777 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:17.777 "is_configured": true, 00:15:17.777 "data_offset": 2048, 00:15:17.777 "data_size": 63488 00:15:17.777 }, 00:15:17.777 { 00:15:17.777 "name": "BaseBdev4", 00:15:17.777 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:17.777 "is_configured": true, 00:15:17.777 "data_offset": 2048, 00:15:17.777 "data_size": 63488 00:15:17.777 } 00:15:17.777 ] 00:15:17.777 }' 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.777 09:27:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.777 [2024-11-20 09:27:43.171531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.777 [2024-11-20 09:27:43.171800] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:17.777 [2024-11-20 09:27:43.171895] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:15:17.777 request: 00:15:17.777 { 00:15:17.777 "base_bdev": "BaseBdev1", 00:15:17.777 "raid_bdev": "raid_bdev1", 00:15:17.777 "method": "bdev_raid_add_base_bdev", 00:15:17.777 "req_id": 1 00:15:17.777 } 00:15:17.777 Got JSON-RPC error response 00:15:17.777 response: 00:15:17.777 { 00:15:17.777 "code": -22, 00:15:17.777 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:17.777 } 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.777 09:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.159 09:27:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.159 "name": "raid_bdev1", 00:15:19.159 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:19.159 "strip_size_kb": 0, 00:15:19.159 "state": "online", 00:15:19.159 "raid_level": "raid1", 00:15:19.159 "superblock": true, 00:15:19.159 "num_base_bdevs": 4, 00:15:19.159 "num_base_bdevs_discovered": 2, 00:15:19.159 "num_base_bdevs_operational": 2, 00:15:19.159 "base_bdevs_list": [ 00:15:19.159 { 00:15:19.159 "name": null, 00:15:19.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.159 "is_configured": false, 00:15:19.159 "data_offset": 0, 00:15:19.159 "data_size": 63488 00:15:19.159 }, 00:15:19.159 { 00:15:19.159 "name": null, 00:15:19.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.159 "is_configured": false, 00:15:19.159 "data_offset": 2048, 00:15:19.159 "data_size": 63488 00:15:19.159 }, 00:15:19.159 { 00:15:19.159 "name": "BaseBdev3", 00:15:19.159 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:19.159 "is_configured": true, 00:15:19.159 "data_offset": 2048, 00:15:19.159 "data_size": 63488 00:15:19.159 }, 00:15:19.159 { 00:15:19.159 "name": "BaseBdev4", 00:15:19.159 "uuid": 
"29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:19.159 "is_configured": true, 00:15:19.159 "data_offset": 2048, 00:15:19.159 "data_size": 63488 00:15:19.159 } 00:15:19.159 ] 00:15:19.159 }' 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.159 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.418 "name": "raid_bdev1", 00:15:19.418 "uuid": "821829c4-e7af-49b7-bd49-ac1b5ad882ab", 00:15:19.418 "strip_size_kb": 0, 00:15:19.418 "state": "online", 00:15:19.418 "raid_level": "raid1", 00:15:19.418 "superblock": true, 00:15:19.418 "num_base_bdevs": 4, 00:15:19.418 "num_base_bdevs_discovered": 2, 00:15:19.418 "num_base_bdevs_operational": 2, 00:15:19.418 
"base_bdevs_list": [ 00:15:19.418 { 00:15:19.418 "name": null, 00:15:19.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.418 "is_configured": false, 00:15:19.418 "data_offset": 0, 00:15:19.418 "data_size": 63488 00:15:19.418 }, 00:15:19.418 { 00:15:19.418 "name": null, 00:15:19.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.418 "is_configured": false, 00:15:19.418 "data_offset": 2048, 00:15:19.418 "data_size": 63488 00:15:19.418 }, 00:15:19.418 { 00:15:19.418 "name": "BaseBdev3", 00:15:19.418 "uuid": "cc314367-7384-554e-8585-317e3d0f7317", 00:15:19.418 "is_configured": true, 00:15:19.418 "data_offset": 2048, 00:15:19.418 "data_size": 63488 00:15:19.418 }, 00:15:19.418 { 00:15:19.418 "name": "BaseBdev4", 00:15:19.418 "uuid": "29db7037-1473-58c4-b0c3-06225bd33f2d", 00:15:19.418 "is_configured": true, 00:15:19.418 "data_offset": 2048, 00:15:19.418 "data_size": 63488 00:15:19.418 } 00:15:19.418 ] 00:15:19.418 }' 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79557 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79557 ']' 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79557 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79557 00:15:19.418 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.418 killing process with pid 79557 00:15:19.418 Received shutdown signal, test time was about 18.133477 seconds 00:15:19.418 00:15:19.418 Latency(us) 00:15:19.418 [2024-11-20T09:27:44.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.419 [2024-11-20T09:27:44.875Z] =================================================================================================================== 00:15:19.419 [2024-11-20T09:27:44.875Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.419 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.419 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79557' 00:15:19.419 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79557 00:15:19.419 [2024-11-20 09:27:44.829342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.419 [2024-11-20 09:27:44.829503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.419 09:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79557 00:15:19.419 [2024-11-20 09:27:44.829602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.419 [2024-11-20 09:27:44.829616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:19.987 [2024-11-20 09:27:45.332699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.363 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.363 00:15:21.363 real 0m21.963s 00:15:21.363 user 0m28.528s 00:15:21.363 sys 0m2.730s 00:15:21.363 09:27:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.363 ************************************ 00:15:21.363 END TEST raid_rebuild_test_sb_io 00:15:21.363 ************************************ 00:15:21.364 09:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.364 09:27:46 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:21.364 09:27:46 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:21.364 09:27:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:21.364 09:27:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.364 09:27:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.364 ************************************ 00:15:21.364 START TEST raid5f_state_function_test 00:15:21.364 ************************************ 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.364 09:27:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80281 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80281' 00:15:21.364 Process raid pid: 80281 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80281 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80281 ']' 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.364 09:27:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.623 [2024-11-20 09:27:46.887655] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:15:21.623 [2024-11-20 09:27:46.887800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.623 [2024-11-20 09:27:47.063204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.882 [2024-11-20 09:27:47.202800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.141 [2024-11-20 09:27:47.454862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.141 [2024-11-20 09:27:47.454915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.400 [2024-11-20 09:27:47.821373] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.400 [2024-11-20 09:27:47.821577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.400 [2024-11-20 09:27:47.821629] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.400 [2024-11-20 09:27:47.821677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.400 [2024-11-20 09:27:47.821718] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:22.400 [2024-11-20 09:27:47.821747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.400 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.659 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:22.659 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.659 "name": "Existed_Raid", 00:15:22.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.659 "strip_size_kb": 64, 00:15:22.659 "state": "configuring", 00:15:22.659 "raid_level": "raid5f", 00:15:22.659 "superblock": false, 00:15:22.659 "num_base_bdevs": 3, 00:15:22.659 "num_base_bdevs_discovered": 0, 00:15:22.660 "num_base_bdevs_operational": 3, 00:15:22.660 "base_bdevs_list": [ 00:15:22.660 { 00:15:22.660 "name": "BaseBdev1", 00:15:22.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.660 "is_configured": false, 00:15:22.660 "data_offset": 0, 00:15:22.660 "data_size": 0 00:15:22.660 }, 00:15:22.660 { 00:15:22.660 "name": "BaseBdev2", 00:15:22.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.660 "is_configured": false, 00:15:22.660 "data_offset": 0, 00:15:22.660 "data_size": 0 00:15:22.660 }, 00:15:22.660 { 00:15:22.660 "name": "BaseBdev3", 00:15:22.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.660 "is_configured": false, 00:15:22.660 "data_offset": 0, 00:15:22.660 "data_size": 0 00:15:22.660 } 00:15:22.660 ] 00:15:22.660 }' 00:15:22.660 09:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.660 09:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 [2024-11-20 09:27:48.232633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.919 [2024-11-20 09:27:48.232786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 [2024-11-20 09:27:48.244631] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.919 [2024-11-20 09:27:48.244786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.919 [2024-11-20 09:27:48.244817] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.919 [2024-11-20 09:27:48.244846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.919 [2024-11-20 09:27:48.244868] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.919 [2024-11-20 09:27:48.244893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 [2024-11-20 09:27:48.299124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.919 BaseBdev1 00:15:22.919 09:27:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 [ 00:15:22.919 { 00:15:22.919 "name": "BaseBdev1", 00:15:22.919 "aliases": [ 00:15:22.919 "d2b733c2-71a6-46c5-8533-cb6319b56e45" 00:15:22.919 ], 00:15:22.919 "product_name": "Malloc disk", 00:15:22.919 "block_size": 512, 00:15:22.919 "num_blocks": 65536, 00:15:22.919 "uuid": "d2b733c2-71a6-46c5-8533-cb6319b56e45", 00:15:22.919 "assigned_rate_limits": { 00:15:22.919 "rw_ios_per_sec": 0, 00:15:22.919 
"rw_mbytes_per_sec": 0, 00:15:22.919 "r_mbytes_per_sec": 0, 00:15:22.919 "w_mbytes_per_sec": 0 00:15:22.919 }, 00:15:22.919 "claimed": true, 00:15:22.919 "claim_type": "exclusive_write", 00:15:22.919 "zoned": false, 00:15:22.919 "supported_io_types": { 00:15:22.919 "read": true, 00:15:22.919 "write": true, 00:15:22.919 "unmap": true, 00:15:22.919 "flush": true, 00:15:22.919 "reset": true, 00:15:22.919 "nvme_admin": false, 00:15:22.919 "nvme_io": false, 00:15:22.919 "nvme_io_md": false, 00:15:22.919 "write_zeroes": true, 00:15:22.919 "zcopy": true, 00:15:22.919 "get_zone_info": false, 00:15:22.919 "zone_management": false, 00:15:22.919 "zone_append": false, 00:15:22.919 "compare": false, 00:15:22.919 "compare_and_write": false, 00:15:22.919 "abort": true, 00:15:22.919 "seek_hole": false, 00:15:22.919 "seek_data": false, 00:15:22.919 "copy": true, 00:15:22.919 "nvme_iov_md": false 00:15:22.919 }, 00:15:22.919 "memory_domains": [ 00:15:22.919 { 00:15:22.919 "dma_device_id": "system", 00:15:22.919 "dma_device_type": 1 00:15:22.919 }, 00:15:22.919 { 00:15:22.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.919 "dma_device_type": 2 00:15:22.919 } 00:15:22.919 ], 00:15:22.919 "driver_specific": {} 00:15:22.919 } 00:15:22.919 ] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.919 09:27:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.919 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.178 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.178 "name": "Existed_Raid", 00:15:23.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.178 "strip_size_kb": 64, 00:15:23.178 "state": "configuring", 00:15:23.178 "raid_level": "raid5f", 00:15:23.178 "superblock": false, 00:15:23.178 "num_base_bdevs": 3, 00:15:23.178 "num_base_bdevs_discovered": 1, 00:15:23.178 "num_base_bdevs_operational": 3, 00:15:23.178 "base_bdevs_list": [ 00:15:23.178 { 00:15:23.178 "name": "BaseBdev1", 00:15:23.178 "uuid": "d2b733c2-71a6-46c5-8533-cb6319b56e45", 00:15:23.178 "is_configured": true, 00:15:23.178 "data_offset": 0, 00:15:23.178 "data_size": 65536 00:15:23.178 }, 00:15:23.178 { 00:15:23.178 "name": 
"BaseBdev2", 00:15:23.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.178 "is_configured": false, 00:15:23.178 "data_offset": 0, 00:15:23.178 "data_size": 0 00:15:23.178 }, 00:15:23.178 { 00:15:23.178 "name": "BaseBdev3", 00:15:23.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.178 "is_configured": false, 00:15:23.178 "data_offset": 0, 00:15:23.178 "data_size": 0 00:15:23.178 } 00:15:23.178 ] 00:15:23.178 }' 00:15:23.178 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.178 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.437 [2024-11-20 09:27:48.750489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.437 [2024-11-20 09:27:48.750664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.437 [2024-11-20 09:27:48.762514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.437 [2024-11-20 09:27:48.764738] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:23.437 [2024-11-20 09:27:48.764847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.437 [2024-11-20 09:27:48.764894] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.437 [2024-11-20 09:27:48.764929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.437 "name": "Existed_Raid", 00:15:23.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.437 "strip_size_kb": 64, 00:15:23.437 "state": "configuring", 00:15:23.437 "raid_level": "raid5f", 00:15:23.437 "superblock": false, 00:15:23.437 "num_base_bdevs": 3, 00:15:23.437 "num_base_bdevs_discovered": 1, 00:15:23.437 "num_base_bdevs_operational": 3, 00:15:23.437 "base_bdevs_list": [ 00:15:23.437 { 00:15:23.437 "name": "BaseBdev1", 00:15:23.437 "uuid": "d2b733c2-71a6-46c5-8533-cb6319b56e45", 00:15:23.437 "is_configured": true, 00:15:23.437 "data_offset": 0, 00:15:23.437 "data_size": 65536 00:15:23.437 }, 00:15:23.437 { 00:15:23.437 "name": "BaseBdev2", 00:15:23.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.437 "is_configured": false, 00:15:23.437 "data_offset": 0, 00:15:23.437 "data_size": 0 00:15:23.437 }, 00:15:23.437 { 00:15:23.437 "name": "BaseBdev3", 00:15:23.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.437 "is_configured": false, 00:15:23.437 "data_offset": 0, 00:15:23.437 "data_size": 0 00:15:23.437 } 00:15:23.437 ] 00:15:23.437 }' 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.437 09:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.006 [2024-11-20 09:27:49.263977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.006 BaseBdev2 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.006 [ 00:15:24.006 { 00:15:24.006 "name": "BaseBdev2", 00:15:24.006 "aliases": [ 00:15:24.006 "39bacad6-2ab9-475d-8ba8-51847acf7b8c" 00:15:24.006 ], 00:15:24.006 "product_name": "Malloc disk", 00:15:24.006 "block_size": 512, 00:15:24.006 "num_blocks": 65536, 00:15:24.006 "uuid": "39bacad6-2ab9-475d-8ba8-51847acf7b8c", 00:15:24.006 "assigned_rate_limits": { 00:15:24.006 "rw_ios_per_sec": 0, 00:15:24.006 "rw_mbytes_per_sec": 0, 00:15:24.006 "r_mbytes_per_sec": 0, 00:15:24.006 "w_mbytes_per_sec": 0 00:15:24.006 }, 00:15:24.006 "claimed": true, 00:15:24.006 "claim_type": "exclusive_write", 00:15:24.006 "zoned": false, 00:15:24.006 "supported_io_types": { 00:15:24.006 "read": true, 00:15:24.006 "write": true, 00:15:24.006 "unmap": true, 00:15:24.006 "flush": true, 00:15:24.006 "reset": true, 00:15:24.006 "nvme_admin": false, 00:15:24.006 "nvme_io": false, 00:15:24.006 "nvme_io_md": false, 00:15:24.006 "write_zeroes": true, 00:15:24.006 "zcopy": true, 00:15:24.006 "get_zone_info": false, 00:15:24.006 "zone_management": false, 00:15:24.006 "zone_append": false, 00:15:24.006 "compare": false, 00:15:24.006 "compare_and_write": false, 00:15:24.006 "abort": true, 00:15:24.006 "seek_hole": false, 00:15:24.006 "seek_data": false, 00:15:24.006 "copy": true, 00:15:24.006 "nvme_iov_md": false 00:15:24.006 }, 00:15:24.006 "memory_domains": [ 00:15:24.006 { 00:15:24.006 "dma_device_id": "system", 00:15:24.006 "dma_device_type": 1 00:15:24.006 }, 00:15:24.006 { 00:15:24.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.006 "dma_device_type": 2 00:15:24.006 } 00:15:24.006 ], 00:15:24.006 "driver_specific": {} 00:15:24.006 } 00:15:24.006 ] 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.006 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:24.007 "name": "Existed_Raid", 00:15:24.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.007 "strip_size_kb": 64, 00:15:24.007 "state": "configuring", 00:15:24.007 "raid_level": "raid5f", 00:15:24.007 "superblock": false, 00:15:24.007 "num_base_bdevs": 3, 00:15:24.007 "num_base_bdevs_discovered": 2, 00:15:24.007 "num_base_bdevs_operational": 3, 00:15:24.007 "base_bdevs_list": [ 00:15:24.007 { 00:15:24.007 "name": "BaseBdev1", 00:15:24.007 "uuid": "d2b733c2-71a6-46c5-8533-cb6319b56e45", 00:15:24.007 "is_configured": true, 00:15:24.007 "data_offset": 0, 00:15:24.007 "data_size": 65536 00:15:24.007 }, 00:15:24.007 { 00:15:24.007 "name": "BaseBdev2", 00:15:24.007 "uuid": "39bacad6-2ab9-475d-8ba8-51847acf7b8c", 00:15:24.007 "is_configured": true, 00:15:24.007 "data_offset": 0, 00:15:24.007 "data_size": 65536 00:15:24.007 }, 00:15:24.007 { 00:15:24.007 "name": "BaseBdev3", 00:15:24.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.007 "is_configured": false, 00:15:24.007 "data_offset": 0, 00:15:24.007 "data_size": 0 00:15:24.007 } 00:15:24.007 ] 00:15:24.007 }' 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.007 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.574 [2024-11-20 09:27:49.819426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.574 [2024-11-20 09:27:49.819622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:24.574 [2024-11-20 09:27:49.819659] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:24.574 [2024-11-20 09:27:49.820011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:24.574 [2024-11-20 09:27:49.827113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:24.574 [2024-11-20 09:27:49.827183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:24.574 [2024-11-20 09:27:49.827587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.574 BaseBdev3 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.574 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.574 [ 00:15:24.574 { 00:15:24.574 "name": "BaseBdev3", 00:15:24.574 "aliases": [ 00:15:24.574 "acca42ab-0580-4a87-96fd-ab9cc3b58a22" 00:15:24.574 ], 00:15:24.574 "product_name": "Malloc disk", 00:15:24.574 "block_size": 512, 00:15:24.574 "num_blocks": 65536, 00:15:24.574 "uuid": "acca42ab-0580-4a87-96fd-ab9cc3b58a22", 00:15:24.574 "assigned_rate_limits": { 00:15:24.574 "rw_ios_per_sec": 0, 00:15:24.574 "rw_mbytes_per_sec": 0, 00:15:24.574 "r_mbytes_per_sec": 0, 00:15:24.574 "w_mbytes_per_sec": 0 00:15:24.574 }, 00:15:24.574 "claimed": true, 00:15:24.574 "claim_type": "exclusive_write", 00:15:24.574 "zoned": false, 00:15:24.574 "supported_io_types": { 00:15:24.574 "read": true, 00:15:24.574 "write": true, 00:15:24.574 "unmap": true, 00:15:24.574 "flush": true, 00:15:24.574 "reset": true, 00:15:24.574 "nvme_admin": false, 00:15:24.574 "nvme_io": false, 00:15:24.574 "nvme_io_md": false, 00:15:24.574 "write_zeroes": true, 00:15:24.574 "zcopy": true, 00:15:24.574 "get_zone_info": false, 00:15:24.574 "zone_management": false, 00:15:24.574 "zone_append": false, 00:15:24.574 "compare": false, 00:15:24.574 "compare_and_write": false, 00:15:24.574 "abort": true, 00:15:24.574 "seek_hole": false, 00:15:24.574 "seek_data": false, 00:15:24.574 "copy": true, 00:15:24.574 "nvme_iov_md": false 00:15:24.574 }, 00:15:24.574 "memory_domains": [ 00:15:24.575 { 00:15:24.575 "dma_device_id": "system", 00:15:24.575 "dma_device_type": 1 00:15:24.575 }, 00:15:24.575 { 00:15:24.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.575 "dma_device_type": 2 00:15:24.575 } 00:15:24.575 ], 00:15:24.575 "driver_specific": {} 00:15:24.575 } 00:15:24.575 ] 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.575 09:27:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.575 "name": "Existed_Raid", 00:15:24.575 "uuid": "2ba08d00-3db5-4a8f-8d21-9a3b99f7a6e0", 00:15:24.575 "strip_size_kb": 64, 00:15:24.575 "state": "online", 00:15:24.575 "raid_level": "raid5f", 00:15:24.575 "superblock": false, 00:15:24.575 "num_base_bdevs": 3, 00:15:24.575 "num_base_bdevs_discovered": 3, 00:15:24.575 "num_base_bdevs_operational": 3, 00:15:24.575 "base_bdevs_list": [ 00:15:24.575 { 00:15:24.575 "name": "BaseBdev1", 00:15:24.575 "uuid": "d2b733c2-71a6-46c5-8533-cb6319b56e45", 00:15:24.575 "is_configured": true, 00:15:24.575 "data_offset": 0, 00:15:24.575 "data_size": 65536 00:15:24.575 }, 00:15:24.575 { 00:15:24.575 "name": "BaseBdev2", 00:15:24.575 "uuid": "39bacad6-2ab9-475d-8ba8-51847acf7b8c", 00:15:24.575 "is_configured": true, 00:15:24.575 "data_offset": 0, 00:15:24.575 "data_size": 65536 00:15:24.575 }, 00:15:24.575 { 00:15:24.575 "name": "BaseBdev3", 00:15:24.575 "uuid": "acca42ab-0580-4a87-96fd-ab9cc3b58a22", 00:15:24.575 "is_configured": true, 00:15:24.575 "data_offset": 0, 00:15:24.575 "data_size": 65536 00:15:24.575 } 00:15:24.575 ] 00:15:24.575 }' 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.575 09:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.142 09:27:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.142 [2024-11-20 09:27:50.350551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.142 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.142 "name": "Existed_Raid", 00:15:25.142 "aliases": [ 00:15:25.142 "2ba08d00-3db5-4a8f-8d21-9a3b99f7a6e0" 00:15:25.142 ], 00:15:25.142 "product_name": "Raid Volume", 00:15:25.142 "block_size": 512, 00:15:25.142 "num_blocks": 131072, 00:15:25.142 "uuid": "2ba08d00-3db5-4a8f-8d21-9a3b99f7a6e0", 00:15:25.142 "assigned_rate_limits": { 00:15:25.142 "rw_ios_per_sec": 0, 00:15:25.142 "rw_mbytes_per_sec": 0, 00:15:25.142 "r_mbytes_per_sec": 0, 00:15:25.142 "w_mbytes_per_sec": 0 00:15:25.142 }, 00:15:25.142 "claimed": false, 00:15:25.142 "zoned": false, 00:15:25.142 "supported_io_types": { 00:15:25.142 "read": true, 00:15:25.142 "write": true, 00:15:25.142 "unmap": false, 00:15:25.142 "flush": false, 00:15:25.142 "reset": true, 00:15:25.142 "nvme_admin": false, 00:15:25.142 "nvme_io": false, 00:15:25.142 "nvme_io_md": false, 00:15:25.142 "write_zeroes": true, 00:15:25.142 "zcopy": false, 00:15:25.142 "get_zone_info": false, 00:15:25.142 "zone_management": false, 00:15:25.142 "zone_append": false, 
00:15:25.142 "compare": false, 00:15:25.142 "compare_and_write": false, 00:15:25.142 "abort": false, 00:15:25.142 "seek_hole": false, 00:15:25.142 "seek_data": false, 00:15:25.142 "copy": false, 00:15:25.142 "nvme_iov_md": false 00:15:25.142 }, 00:15:25.142 "driver_specific": { 00:15:25.142 "raid": { 00:15:25.142 "uuid": "2ba08d00-3db5-4a8f-8d21-9a3b99f7a6e0", 00:15:25.142 "strip_size_kb": 64, 00:15:25.142 "state": "online", 00:15:25.142 "raid_level": "raid5f", 00:15:25.142 "superblock": false, 00:15:25.142 "num_base_bdevs": 3, 00:15:25.142 "num_base_bdevs_discovered": 3, 00:15:25.142 "num_base_bdevs_operational": 3, 00:15:25.142 "base_bdevs_list": [ 00:15:25.142 { 00:15:25.142 "name": "BaseBdev1", 00:15:25.142 "uuid": "d2b733c2-71a6-46c5-8533-cb6319b56e45", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.142 }, 00:15:25.142 { 00:15:25.142 "name": "BaseBdev2", 00:15:25.142 "uuid": "39bacad6-2ab9-475d-8ba8-51847acf7b8c", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.142 }, 00:15:25.142 { 00:15:25.142 "name": "BaseBdev3", 00:15:25.142 "uuid": "acca42ab-0580-4a87-96fd-ab9cc3b58a22", 00:15:25.142 "is_configured": true, 00:15:25.142 "data_offset": 0, 00:15:25.142 "data_size": 65536 00:15:25.143 } 00:15:25.143 ] 00:15:25.143 } 00:15:25.143 } 00:15:25.143 }' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:25.143 BaseBdev2 00:15:25.143 BaseBdev3' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.143 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.404 [2024-11-20 09:27:50.645852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:25.404 
09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.404 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.404 "name": "Existed_Raid", 00:15:25.404 "uuid": "2ba08d00-3db5-4a8f-8d21-9a3b99f7a6e0", 00:15:25.404 "strip_size_kb": 64, 00:15:25.404 "state": 
"online", 00:15:25.404 "raid_level": "raid5f", 00:15:25.404 "superblock": false, 00:15:25.404 "num_base_bdevs": 3, 00:15:25.404 "num_base_bdevs_discovered": 2, 00:15:25.404 "num_base_bdevs_operational": 2, 00:15:25.404 "base_bdevs_list": [ 00:15:25.405 { 00:15:25.405 "name": null, 00:15:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.405 "is_configured": false, 00:15:25.405 "data_offset": 0, 00:15:25.405 "data_size": 65536 00:15:25.405 }, 00:15:25.405 { 00:15:25.405 "name": "BaseBdev2", 00:15:25.405 "uuid": "39bacad6-2ab9-475d-8ba8-51847acf7b8c", 00:15:25.405 "is_configured": true, 00:15:25.405 "data_offset": 0, 00:15:25.405 "data_size": 65536 00:15:25.405 }, 00:15:25.405 { 00:15:25.405 "name": "BaseBdev3", 00:15:25.405 "uuid": "acca42ab-0580-4a87-96fd-ab9cc3b58a22", 00:15:25.405 "is_configured": true, 00:15:25.405 "data_offset": 0, 00:15:25.405 "data_size": 65536 00:15:25.405 } 00:15:25.405 ] 00:15:25.405 }' 00:15:25.405 09:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.405 09:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.982 [2024-11-20 09:27:51.270642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.982 [2024-11-20 09:27:51.270880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.982 [2024-11-20 09:27:51.389590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.982 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 [2024-11-20 09:27:51.449639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.240 [2024-11-20 09:27:51.449811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 BaseBdev2 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.240 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:26.240 [ 00:15:26.240 { 00:15:26.240 "name": "BaseBdev2", 00:15:26.240 "aliases": [ 00:15:26.240 "c023a066-106d-4720-992a-023db72e85fe" 00:15:26.240 ], 00:15:26.240 "product_name": "Malloc disk", 00:15:26.240 "block_size": 512, 00:15:26.240 "num_blocks": 65536, 00:15:26.240 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:26.240 "assigned_rate_limits": { 00:15:26.240 "rw_ios_per_sec": 0, 00:15:26.240 "rw_mbytes_per_sec": 0, 00:15:26.240 "r_mbytes_per_sec": 0, 00:15:26.240 "w_mbytes_per_sec": 0 00:15:26.240 }, 00:15:26.240 "claimed": false, 00:15:26.240 "zoned": false, 00:15:26.240 "supported_io_types": { 00:15:26.240 "read": true, 00:15:26.240 "write": true, 00:15:26.240 "unmap": true, 00:15:26.240 "flush": true, 00:15:26.499 "reset": true, 00:15:26.499 "nvme_admin": false, 00:15:26.499 "nvme_io": false, 00:15:26.499 "nvme_io_md": false, 00:15:26.499 "write_zeroes": true, 00:15:26.499 "zcopy": true, 00:15:26.499 "get_zone_info": false, 00:15:26.499 "zone_management": false, 00:15:26.499 "zone_append": false, 00:15:26.499 "compare": false, 00:15:26.499 "compare_and_write": false, 00:15:26.499 "abort": true, 00:15:26.499 "seek_hole": false, 00:15:26.499 "seek_data": false, 00:15:26.499 "copy": true, 00:15:26.499 "nvme_iov_md": false 00:15:26.499 }, 00:15:26.499 "memory_domains": [ 00:15:26.499 { 00:15:26.499 "dma_device_id": "system", 00:15:26.499 "dma_device_type": 1 00:15:26.499 }, 00:15:26.499 { 00:15:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.499 "dma_device_type": 2 00:15:26.499 } 00:15:26.499 ], 00:15:26.499 "driver_specific": {} 00:15:26.499 } 00:15:26.499 ] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 BaseBdev3 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.499 [ 00:15:26.499 { 00:15:26.499 "name": "BaseBdev3", 00:15:26.499 "aliases": [ 00:15:26.499 "f3003ce7-9ff4-4045-8a26-fe77780b3615" 00:15:26.499 ], 00:15:26.499 "product_name": "Malloc disk", 00:15:26.499 "block_size": 512, 00:15:26.499 "num_blocks": 65536, 00:15:26.499 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:26.499 "assigned_rate_limits": { 00:15:26.499 "rw_ios_per_sec": 0, 00:15:26.499 "rw_mbytes_per_sec": 0, 00:15:26.499 "r_mbytes_per_sec": 0, 00:15:26.499 "w_mbytes_per_sec": 0 00:15:26.499 }, 00:15:26.499 "claimed": false, 00:15:26.499 "zoned": false, 00:15:26.499 "supported_io_types": { 00:15:26.499 "read": true, 00:15:26.499 "write": true, 00:15:26.499 "unmap": true, 00:15:26.499 "flush": true, 00:15:26.499 "reset": true, 00:15:26.499 "nvme_admin": false, 00:15:26.499 "nvme_io": false, 00:15:26.499 "nvme_io_md": false, 00:15:26.499 "write_zeroes": true, 00:15:26.499 "zcopy": true, 00:15:26.499 "get_zone_info": false, 00:15:26.499 "zone_management": false, 00:15:26.499 "zone_append": false, 00:15:26.499 "compare": false, 00:15:26.499 "compare_and_write": false, 00:15:26.499 "abort": true, 00:15:26.499 "seek_hole": false, 00:15:26.499 "seek_data": false, 00:15:26.499 "copy": true, 00:15:26.499 "nvme_iov_md": false 00:15:26.499 }, 00:15:26.499 "memory_domains": [ 00:15:26.499 { 00:15:26.499 "dma_device_id": "system", 00:15:26.499 "dma_device_type": 1 00:15:26.499 }, 00:15:26.499 { 00:15:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.499 "dma_device_type": 2 00:15:26.499 } 00:15:26.499 ], 00:15:26.499 "driver_specific": {} 00:15:26.499 } 00:15:26.499 ] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.499 09:27:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 [2024-11-20 09:27:51.794492] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.499 [2024-11-20 09:27:51.794650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.499 [2024-11-20 09:27:51.794710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.499 [2024-11-20 09:27:51.796976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.499 09:27:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.499 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.499 "name": "Existed_Raid", 00:15:26.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.499 "strip_size_kb": 64, 00:15:26.499 "state": "configuring", 00:15:26.499 "raid_level": "raid5f", 00:15:26.499 "superblock": false, 00:15:26.499 "num_base_bdevs": 3, 00:15:26.499 "num_base_bdevs_discovered": 2, 00:15:26.499 "num_base_bdevs_operational": 3, 00:15:26.499 "base_bdevs_list": [ 00:15:26.500 { 00:15:26.500 "name": "BaseBdev1", 00:15:26.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.500 "is_configured": false, 00:15:26.500 "data_offset": 0, 00:15:26.500 "data_size": 0 00:15:26.500 }, 00:15:26.500 { 00:15:26.500 "name": "BaseBdev2", 00:15:26.500 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:26.500 "is_configured": true, 00:15:26.500 "data_offset": 0, 00:15:26.500 "data_size": 65536 00:15:26.500 }, 00:15:26.500 { 00:15:26.500 "name": "BaseBdev3", 00:15:26.500 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:26.500 "is_configured": true, 
00:15:26.500 "data_offset": 0, 00:15:26.500 "data_size": 65536 00:15:26.500 } 00:15:26.500 ] 00:15:26.500 }' 00:15:26.500 09:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.500 09:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.067 [2024-11-20 09:27:52.249713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.067 09:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.067 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.067 "name": "Existed_Raid", 00:15:27.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.067 "strip_size_kb": 64, 00:15:27.067 "state": "configuring", 00:15:27.067 "raid_level": "raid5f", 00:15:27.068 "superblock": false, 00:15:27.068 "num_base_bdevs": 3, 00:15:27.068 "num_base_bdevs_discovered": 1, 00:15:27.068 "num_base_bdevs_operational": 3, 00:15:27.068 "base_bdevs_list": [ 00:15:27.068 { 00:15:27.068 "name": "BaseBdev1", 00:15:27.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.068 "is_configured": false, 00:15:27.068 "data_offset": 0, 00:15:27.068 "data_size": 0 00:15:27.068 }, 00:15:27.068 { 00:15:27.068 "name": null, 00:15:27.068 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:27.068 "is_configured": false, 00:15:27.068 "data_offset": 0, 00:15:27.068 "data_size": 65536 00:15:27.068 }, 00:15:27.068 { 00:15:27.068 "name": "BaseBdev3", 00:15:27.068 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:27.068 "is_configured": true, 00:15:27.068 "data_offset": 0, 00:15:27.068 "data_size": 65536 00:15:27.068 } 00:15:27.068 ] 00:15:27.068 }' 00:15:27.068 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.068 09:27:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.326 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.326 [2024-11-20 09:27:52.777090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.584 BaseBdev1 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.584 09:27:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.584 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.584 [ 00:15:27.584 { 00:15:27.584 "name": "BaseBdev1", 00:15:27.584 "aliases": [ 00:15:27.584 "7641a743-639a-46d7-a06a-2b4b45cf999d" 00:15:27.584 ], 00:15:27.584 "product_name": "Malloc disk", 00:15:27.584 "block_size": 512, 00:15:27.584 "num_blocks": 65536, 00:15:27.584 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:27.584 "assigned_rate_limits": { 00:15:27.584 "rw_ios_per_sec": 0, 00:15:27.584 "rw_mbytes_per_sec": 0, 00:15:27.584 "r_mbytes_per_sec": 0, 00:15:27.584 "w_mbytes_per_sec": 0 00:15:27.584 }, 00:15:27.584 "claimed": true, 00:15:27.584 "claim_type": "exclusive_write", 00:15:27.584 "zoned": false, 00:15:27.584 "supported_io_types": { 00:15:27.584 "read": true, 00:15:27.584 "write": true, 00:15:27.584 "unmap": true, 00:15:27.585 "flush": true, 00:15:27.585 "reset": true, 00:15:27.585 "nvme_admin": false, 00:15:27.585 "nvme_io": false, 00:15:27.585 "nvme_io_md": false, 00:15:27.585 "write_zeroes": true, 00:15:27.585 "zcopy": true, 00:15:27.585 "get_zone_info": false, 00:15:27.585 "zone_management": false, 00:15:27.585 "zone_append": false, 00:15:27.585 
"compare": false, 00:15:27.585 "compare_and_write": false, 00:15:27.585 "abort": true, 00:15:27.585 "seek_hole": false, 00:15:27.585 "seek_data": false, 00:15:27.585 "copy": true, 00:15:27.585 "nvme_iov_md": false 00:15:27.585 }, 00:15:27.585 "memory_domains": [ 00:15:27.585 { 00:15:27.585 "dma_device_id": "system", 00:15:27.585 "dma_device_type": 1 00:15:27.585 }, 00:15:27.585 { 00:15:27.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.585 "dma_device_type": 2 00:15:27.585 } 00:15:27.585 ], 00:15:27.585 "driver_specific": {} 00:15:27.585 } 00:15:27.585 ] 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.585 09:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.585 "name": "Existed_Raid", 00:15:27.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.585 "strip_size_kb": 64, 00:15:27.585 "state": "configuring", 00:15:27.585 "raid_level": "raid5f", 00:15:27.585 "superblock": false, 00:15:27.585 "num_base_bdevs": 3, 00:15:27.585 "num_base_bdevs_discovered": 2, 00:15:27.585 "num_base_bdevs_operational": 3, 00:15:27.585 "base_bdevs_list": [ 00:15:27.585 { 00:15:27.585 "name": "BaseBdev1", 00:15:27.585 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:27.585 "is_configured": true, 00:15:27.585 "data_offset": 0, 00:15:27.585 "data_size": 65536 00:15:27.585 }, 00:15:27.585 { 00:15:27.585 "name": null, 00:15:27.585 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:27.585 "is_configured": false, 00:15:27.585 "data_offset": 0, 00:15:27.585 "data_size": 65536 00:15:27.585 }, 00:15:27.585 { 00:15:27.585 "name": "BaseBdev3", 00:15:27.585 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:27.585 "is_configured": true, 00:15:27.585 "data_offset": 0, 00:15:27.585 "data_size": 65536 00:15:27.585 } 00:15:27.585 ] 00:15:27.585 }' 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.585 09:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.152 09:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.152 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.152 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.152 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.152 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.152 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:28.152 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.153 [2024-11-20 09:27:53.352234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.153 09:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.153 "name": "Existed_Raid", 00:15:28.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.153 "strip_size_kb": 64, 00:15:28.153 "state": "configuring", 00:15:28.153 "raid_level": "raid5f", 00:15:28.153 "superblock": false, 00:15:28.153 "num_base_bdevs": 3, 00:15:28.153 "num_base_bdevs_discovered": 1, 00:15:28.153 "num_base_bdevs_operational": 3, 00:15:28.153 "base_bdevs_list": [ 00:15:28.153 { 00:15:28.153 "name": "BaseBdev1", 00:15:28.153 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:28.153 "is_configured": true, 00:15:28.153 "data_offset": 0, 00:15:28.153 "data_size": 65536 00:15:28.153 }, 00:15:28.153 { 00:15:28.153 "name": null, 00:15:28.153 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:28.153 "is_configured": false, 00:15:28.153 "data_offset": 0, 00:15:28.153 "data_size": 65536 00:15:28.153 }, 00:15:28.153 { 00:15:28.153 "name": null, 
00:15:28.153 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:28.153 "is_configured": false, 00:15:28.153 "data_offset": 0, 00:15:28.153 "data_size": 65536 00:15:28.153 } 00:15:28.153 ] 00:15:28.153 }' 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.153 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.411 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.411 [2024-11-20 09:27:53.859513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.670 09:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.670 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.670 "name": "Existed_Raid", 00:15:28.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.670 "strip_size_kb": 64, 00:15:28.670 "state": "configuring", 00:15:28.670 "raid_level": "raid5f", 00:15:28.670 "superblock": false, 00:15:28.670 "num_base_bdevs": 3, 00:15:28.670 "num_base_bdevs_discovered": 2, 00:15:28.670 "num_base_bdevs_operational": 3, 00:15:28.670 "base_bdevs_list": [ 00:15:28.670 { 
00:15:28.670 "name": "BaseBdev1", 00:15:28.670 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:28.670 "is_configured": true, 00:15:28.670 "data_offset": 0, 00:15:28.670 "data_size": 65536 00:15:28.670 }, 00:15:28.670 { 00:15:28.670 "name": null, 00:15:28.671 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:28.671 "is_configured": false, 00:15:28.671 "data_offset": 0, 00:15:28.671 "data_size": 65536 00:15:28.671 }, 00:15:28.671 { 00:15:28.671 "name": "BaseBdev3", 00:15:28.671 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:28.671 "is_configured": true, 00:15:28.671 "data_offset": 0, 00:15:28.671 "data_size": 65536 00:15:28.671 } 00:15:28.671 ] 00:15:28.671 }' 00:15:28.671 09:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.671 09:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.929 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.929 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.929 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.929 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.929 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.189 [2024-11-20 09:27:54.386644] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.189 "name": "Existed_Raid", 00:15:29.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.189 "strip_size_kb": 64, 00:15:29.189 "state": "configuring", 00:15:29.189 "raid_level": "raid5f", 00:15:29.189 "superblock": false, 00:15:29.189 "num_base_bdevs": 3, 00:15:29.189 "num_base_bdevs_discovered": 1, 00:15:29.189 "num_base_bdevs_operational": 3, 00:15:29.189 "base_bdevs_list": [ 00:15:29.189 { 00:15:29.189 "name": null, 00:15:29.189 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:29.189 "is_configured": false, 00:15:29.189 "data_offset": 0, 00:15:29.189 "data_size": 65536 00:15:29.189 }, 00:15:29.189 { 00:15:29.189 "name": null, 00:15:29.189 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:29.189 "is_configured": false, 00:15:29.189 "data_offset": 0, 00:15:29.189 "data_size": 65536 00:15:29.189 }, 00:15:29.189 { 00:15:29.189 "name": "BaseBdev3", 00:15:29.189 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:29.189 "is_configured": true, 00:15:29.189 "data_offset": 0, 00:15:29.189 "data_size": 65536 00:15:29.189 } 00:15:29.189 ] 00:15:29.189 }' 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.189 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.758 [2024-11-20 09:27:54.986444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.758 09:27:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.758 09:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.758 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.758 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.758 "name": "Existed_Raid", 00:15:29.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.758 "strip_size_kb": 64, 00:15:29.758 "state": "configuring", 00:15:29.758 "raid_level": "raid5f", 00:15:29.758 "superblock": false, 00:15:29.758 "num_base_bdevs": 3, 00:15:29.758 "num_base_bdevs_discovered": 2, 00:15:29.758 "num_base_bdevs_operational": 3, 00:15:29.758 "base_bdevs_list": [ 00:15:29.758 { 00:15:29.758 "name": null, 00:15:29.758 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:29.758 "is_configured": false, 00:15:29.758 "data_offset": 0, 00:15:29.758 "data_size": 65536 00:15:29.758 }, 00:15:29.758 { 00:15:29.758 "name": "BaseBdev2", 00:15:29.758 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:29.758 "is_configured": true, 00:15:29.758 "data_offset": 0, 00:15:29.758 "data_size": 65536 00:15:29.758 }, 00:15:29.758 { 00:15:29.758 "name": "BaseBdev3", 00:15:29.758 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:29.758 "is_configured": true, 00:15:29.758 "data_offset": 0, 00:15:29.758 "data_size": 65536 00:15:29.758 } 00:15:29.758 ] 00:15:29.758 }' 00:15:29.758 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.758 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.327 
09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.327 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7641a743-639a-46d7-a06a-2b4b45cf999d 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 [2024-11-20 09:27:55.621190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.328 [2024-11-20 09:27:55.621257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:30.328 [2024-11-20 09:27:55.621268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:30.328 [2024-11-20 09:27:55.621587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:30.328 [2024-11-20 09:27:55.628416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:30.328 [2024-11-20 09:27:55.628456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:30.328 [2024-11-20 09:27:55.628764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.328 NewBaseBdev 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.328 09:27:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 [ 00:15:30.328 { 00:15:30.328 "name": "NewBaseBdev", 00:15:30.328 "aliases": [ 00:15:30.328 "7641a743-639a-46d7-a06a-2b4b45cf999d" 00:15:30.328 ], 00:15:30.328 "product_name": "Malloc disk", 00:15:30.328 "block_size": 512, 00:15:30.328 "num_blocks": 65536, 00:15:30.328 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:30.328 "assigned_rate_limits": { 00:15:30.328 "rw_ios_per_sec": 0, 00:15:30.328 "rw_mbytes_per_sec": 0, 00:15:30.328 "r_mbytes_per_sec": 0, 00:15:30.328 "w_mbytes_per_sec": 0 00:15:30.328 }, 00:15:30.328 "claimed": true, 00:15:30.328 "claim_type": "exclusive_write", 00:15:30.328 "zoned": false, 00:15:30.328 "supported_io_types": { 00:15:30.328 "read": true, 00:15:30.328 "write": true, 00:15:30.328 "unmap": true, 00:15:30.328 "flush": true, 00:15:30.328 "reset": true, 00:15:30.328 "nvme_admin": false, 00:15:30.328 "nvme_io": false, 00:15:30.328 "nvme_io_md": false, 00:15:30.328 "write_zeroes": true, 00:15:30.328 "zcopy": true, 00:15:30.328 "get_zone_info": false, 00:15:30.328 "zone_management": false, 00:15:30.328 "zone_append": false, 00:15:30.328 "compare": false, 00:15:30.328 "compare_and_write": false, 00:15:30.328 "abort": true, 00:15:30.328 "seek_hole": false, 00:15:30.328 "seek_data": false, 00:15:30.328 "copy": true, 00:15:30.328 "nvme_iov_md": false 00:15:30.328 }, 00:15:30.328 "memory_domains": [ 00:15:30.328 { 00:15:30.328 "dma_device_id": "system", 00:15:30.328 "dma_device_type": 1 00:15:30.328 }, 00:15:30.328 { 00:15:30.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.328 "dma_device_type": 2 00:15:30.328 } 00:15:30.328 ], 00:15:30.328 "driver_specific": {} 00:15:30.328 } 00:15:30.328 ] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:30.328 09:27:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.328 "name": "Existed_Raid", 00:15:30.328 "uuid": "8948edde-f258-4371-9914-873faba0233f", 00:15:30.328 "strip_size_kb": 64, 00:15:30.328 "state": "online", 
00:15:30.328 "raid_level": "raid5f", 00:15:30.328 "superblock": false, 00:15:30.328 "num_base_bdevs": 3, 00:15:30.328 "num_base_bdevs_discovered": 3, 00:15:30.328 "num_base_bdevs_operational": 3, 00:15:30.328 "base_bdevs_list": [ 00:15:30.328 { 00:15:30.328 "name": "NewBaseBdev", 00:15:30.328 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:30.328 "is_configured": true, 00:15:30.328 "data_offset": 0, 00:15:30.328 "data_size": 65536 00:15:30.328 }, 00:15:30.328 { 00:15:30.328 "name": "BaseBdev2", 00:15:30.328 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:30.328 "is_configured": true, 00:15:30.328 "data_offset": 0, 00:15:30.328 "data_size": 65536 00:15:30.328 }, 00:15:30.328 { 00:15:30.328 "name": "BaseBdev3", 00:15:30.328 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:30.328 "is_configured": true, 00:15:30.328 "data_offset": 0, 00:15:30.328 "data_size": 65536 00:15:30.328 } 00:15:30.328 ] 00:15:30.328 }' 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.328 09:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.897 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.898 [2024-11-20 09:27:56.148232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:30.898 "name": "Existed_Raid", 00:15:30.898 "aliases": [ 00:15:30.898 "8948edde-f258-4371-9914-873faba0233f" 00:15:30.898 ], 00:15:30.898 "product_name": "Raid Volume", 00:15:30.898 "block_size": 512, 00:15:30.898 "num_blocks": 131072, 00:15:30.898 "uuid": "8948edde-f258-4371-9914-873faba0233f", 00:15:30.898 "assigned_rate_limits": { 00:15:30.898 "rw_ios_per_sec": 0, 00:15:30.898 "rw_mbytes_per_sec": 0, 00:15:30.898 "r_mbytes_per_sec": 0, 00:15:30.898 "w_mbytes_per_sec": 0 00:15:30.898 }, 00:15:30.898 "claimed": false, 00:15:30.898 "zoned": false, 00:15:30.898 "supported_io_types": { 00:15:30.898 "read": true, 00:15:30.898 "write": true, 00:15:30.898 "unmap": false, 00:15:30.898 "flush": false, 00:15:30.898 "reset": true, 00:15:30.898 "nvme_admin": false, 00:15:30.898 "nvme_io": false, 00:15:30.898 "nvme_io_md": false, 00:15:30.898 "write_zeroes": true, 00:15:30.898 "zcopy": false, 00:15:30.898 "get_zone_info": false, 00:15:30.898 "zone_management": false, 00:15:30.898 "zone_append": false, 00:15:30.898 "compare": false, 00:15:30.898 "compare_and_write": false, 00:15:30.898 "abort": false, 00:15:30.898 "seek_hole": false, 00:15:30.898 "seek_data": false, 00:15:30.898 "copy": false, 00:15:30.898 "nvme_iov_md": false 00:15:30.898 }, 00:15:30.898 "driver_specific": { 00:15:30.898 "raid": { 00:15:30.898 "uuid": "8948edde-f258-4371-9914-873faba0233f", 
00:15:30.898 "strip_size_kb": 64, 00:15:30.898 "state": "online", 00:15:30.898 "raid_level": "raid5f", 00:15:30.898 "superblock": false, 00:15:30.898 "num_base_bdevs": 3, 00:15:30.898 "num_base_bdevs_discovered": 3, 00:15:30.898 "num_base_bdevs_operational": 3, 00:15:30.898 "base_bdevs_list": [ 00:15:30.898 { 00:15:30.898 "name": "NewBaseBdev", 00:15:30.898 "uuid": "7641a743-639a-46d7-a06a-2b4b45cf999d", 00:15:30.898 "is_configured": true, 00:15:30.898 "data_offset": 0, 00:15:30.898 "data_size": 65536 00:15:30.898 }, 00:15:30.898 { 00:15:30.898 "name": "BaseBdev2", 00:15:30.898 "uuid": "c023a066-106d-4720-992a-023db72e85fe", 00:15:30.898 "is_configured": true, 00:15:30.898 "data_offset": 0, 00:15:30.898 "data_size": 65536 00:15:30.898 }, 00:15:30.898 { 00:15:30.898 "name": "BaseBdev3", 00:15:30.898 "uuid": "f3003ce7-9ff4-4045-8a26-fe77780b3615", 00:15:30.898 "is_configured": true, 00:15:30.898 "data_offset": 0, 00:15:30.898 "data_size": 65536 00:15:30.898 } 00:15:30.898 ] 00:15:30.898 } 00:15:30.898 } 00:15:30.898 }' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:30.898 BaseBdev2 00:15:30.898 BaseBdev3' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:30.898 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.157 [2024-11-20 09:27:56.407608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.157 [2024-11-20 09:27:56.407661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.157 [2024-11-20 09:27:56.407776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.157 [2024-11-20 09:27:56.408118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.157 [2024-11-20 09:27:56.408147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80281 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80281 ']' 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80281 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80281 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80281' 00:15:31.157 killing process with pid 80281 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80281 00:15:31.157 [2024-11-20 09:27:56.449490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.157 09:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80281 00:15:31.415 [2024-11-20 09:27:56.805369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:32.793 00:15:32.793 real 0m11.370s 00:15:32.793 user 0m17.708s 00:15:32.793 sys 0m2.155s 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.793 ************************************ 00:15:32.793 END TEST raid5f_state_function_test 00:15:32.793 ************************************ 00:15:32.793 09:27:58 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:32.793 09:27:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:15:32.793 09:27:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.793 09:27:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.793 ************************************ 00:15:32.793 START TEST raid5f_state_function_test_sb 00:15:32.793 ************************************ 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80908 00:15:32.793 Process raid pid: 80908 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80908' 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80908 00:15:32.793 09:27:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80908 ']' 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.793 09:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.052 [2024-11-20 09:27:58.321324] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:15:33.052 [2024-11-20 09:27:58.321606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.311 [2024-11-20 09:27:58.517554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.311 [2024-11-20 09:27:58.674162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.571 [2024-11-20 09:27:58.923236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.571 [2024-11-20 09:27:58.923285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:33.829 09:27:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.829 [2024-11-20 09:27:59.215686] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.829 [2024-11-20 09:27:59.215765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.829 [2024-11-20 09:27:59.215778] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.829 [2024-11-20 09:27:59.215791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.829 [2024-11-20 09:27:59.215800] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.829 [2024-11-20 09:27:59.215812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.829 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.830 "name": "Existed_Raid", 00:15:33.830 "uuid": "5cf9090c-33b8-4ae6-a97c-cdb28c3fbeee", 00:15:33.830 "strip_size_kb": 64, 00:15:33.830 "state": "configuring", 00:15:33.830 "raid_level": "raid5f", 00:15:33.830 "superblock": true, 00:15:33.830 "num_base_bdevs": 3, 00:15:33.830 "num_base_bdevs_discovered": 0, 00:15:33.830 "num_base_bdevs_operational": 3, 00:15:33.830 "base_bdevs_list": [ 00:15:33.830 { 00:15:33.830 "name": "BaseBdev1", 00:15:33.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.830 "is_configured": false, 00:15:33.830 "data_offset": 0, 00:15:33.830 "data_size": 0 00:15:33.830 }, 00:15:33.830 { 00:15:33.830 "name": "BaseBdev2", 00:15:33.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.830 "is_configured": false, 00:15:33.830 
"data_offset": 0, 00:15:33.830 "data_size": 0 00:15:33.830 }, 00:15:33.830 { 00:15:33.830 "name": "BaseBdev3", 00:15:33.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.830 "is_configured": false, 00:15:33.830 "data_offset": 0, 00:15:33.830 "data_size": 0 00:15:33.830 } 00:15:33.830 ] 00:15:33.830 }' 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.830 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [2024-11-20 09:27:59.635151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.397 [2024-11-20 09:27:59.635196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [2024-11-20 09:27:59.643156] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.397 [2024-11-20 09:27:59.643209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.397 [2024-11-20 09:27:59.643220] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.397 [2024-11-20 09:27:59.643232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.397 [2024-11-20 09:27:59.643242] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.397 [2024-11-20 09:27:59.643253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [2024-11-20 09:27:59.694551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.397 BaseBdev1 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [ 00:15:34.397 { 00:15:34.397 "name": "BaseBdev1", 00:15:34.397 "aliases": [ 00:15:34.397 "f7fa42d1-851d-44cd-b1f8-689a93aed1a2" 00:15:34.397 ], 00:15:34.397 "product_name": "Malloc disk", 00:15:34.397 "block_size": 512, 00:15:34.397 "num_blocks": 65536, 00:15:34.397 "uuid": "f7fa42d1-851d-44cd-b1f8-689a93aed1a2", 00:15:34.397 "assigned_rate_limits": { 00:15:34.397 "rw_ios_per_sec": 0, 00:15:34.397 "rw_mbytes_per_sec": 0, 00:15:34.397 "r_mbytes_per_sec": 0, 00:15:34.397 "w_mbytes_per_sec": 0 00:15:34.397 }, 00:15:34.397 "claimed": true, 00:15:34.397 "claim_type": "exclusive_write", 00:15:34.397 "zoned": false, 00:15:34.397 "supported_io_types": { 00:15:34.397 "read": true, 00:15:34.397 "write": true, 00:15:34.397 "unmap": true, 00:15:34.397 "flush": true, 00:15:34.397 "reset": true, 00:15:34.397 "nvme_admin": false, 00:15:34.397 "nvme_io": false, 00:15:34.397 "nvme_io_md": false, 00:15:34.397 "write_zeroes": true, 00:15:34.397 "zcopy": true, 00:15:34.397 "get_zone_info": false, 00:15:34.397 "zone_management": false, 00:15:34.397 "zone_append": false, 00:15:34.397 "compare": false, 00:15:34.397 "compare_and_write": false, 00:15:34.397 "abort": true, 00:15:34.397 "seek_hole": false, 00:15:34.397 
"seek_data": false, 00:15:34.397 "copy": true, 00:15:34.397 "nvme_iov_md": false 00:15:34.397 }, 00:15:34.397 "memory_domains": [ 00:15:34.397 { 00:15:34.397 "dma_device_id": "system", 00:15:34.397 "dma_device_type": 1 00:15:34.397 }, 00:15:34.397 { 00:15:34.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.397 "dma_device_type": 2 00:15:34.397 } 00:15:34.397 ], 00:15:34.397 "driver_specific": {} 00:15:34.397 } 00:15:34.397 ] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.397 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.397 "name": "Existed_Raid", 00:15:34.397 "uuid": "1763ff01-8d58-4b8b-8fce-019583d11cdc", 00:15:34.397 "strip_size_kb": 64, 00:15:34.397 "state": "configuring", 00:15:34.397 "raid_level": "raid5f", 00:15:34.397 "superblock": true, 00:15:34.397 "num_base_bdevs": 3, 00:15:34.397 "num_base_bdevs_discovered": 1, 00:15:34.397 "num_base_bdevs_operational": 3, 00:15:34.397 "base_bdevs_list": [ 00:15:34.397 { 00:15:34.397 "name": "BaseBdev1", 00:15:34.397 "uuid": "f7fa42d1-851d-44cd-b1f8-689a93aed1a2", 00:15:34.397 "is_configured": true, 00:15:34.397 "data_offset": 2048, 00:15:34.398 "data_size": 63488 00:15:34.398 }, 00:15:34.398 { 00:15:34.398 "name": "BaseBdev2", 00:15:34.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.398 "is_configured": false, 00:15:34.398 "data_offset": 0, 00:15:34.398 "data_size": 0 00:15:34.398 }, 00:15:34.398 { 00:15:34.398 "name": "BaseBdev3", 00:15:34.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.398 "is_configured": false, 00:15:34.398 "data_offset": 0, 00:15:34.398 "data_size": 0 00:15:34.398 } 00:15:34.398 ] 00:15:34.398 }' 00:15:34.398 09:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.398 09:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.987 [2024-11-20 09:28:00.149889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.987 [2024-11-20 09:28:00.149956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.987 [2024-11-20 09:28:00.161959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.987 [2024-11-20 09:28:00.164225] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.987 [2024-11-20 09:28:00.164276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.987 [2024-11-20 09:28:00.164288] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.987 [2024-11-20 09:28:00.164299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.987 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.988 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.988 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.988 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.988 "name": 
"Existed_Raid", 00:15:34.988 "uuid": "f7743f6c-077c-40e6-b768-a4e8f503f548", 00:15:34.988 "strip_size_kb": 64, 00:15:34.988 "state": "configuring", 00:15:34.988 "raid_level": "raid5f", 00:15:34.988 "superblock": true, 00:15:34.988 "num_base_bdevs": 3, 00:15:34.988 "num_base_bdevs_discovered": 1, 00:15:34.988 "num_base_bdevs_operational": 3, 00:15:34.988 "base_bdevs_list": [ 00:15:34.988 { 00:15:34.988 "name": "BaseBdev1", 00:15:34.988 "uuid": "f7fa42d1-851d-44cd-b1f8-689a93aed1a2", 00:15:34.988 "is_configured": true, 00:15:34.988 "data_offset": 2048, 00:15:34.988 "data_size": 63488 00:15:34.988 }, 00:15:34.988 { 00:15:34.988 "name": "BaseBdev2", 00:15:34.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.988 "is_configured": false, 00:15:34.988 "data_offset": 0, 00:15:34.988 "data_size": 0 00:15:34.988 }, 00:15:34.988 { 00:15:34.988 "name": "BaseBdev3", 00:15:34.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.988 "is_configured": false, 00:15:34.988 "data_offset": 0, 00:15:34.988 "data_size": 0 00:15:34.988 } 00:15:34.988 ] 00:15:34.988 }' 00:15:34.988 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.988 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.247 [2024-11-20 09:28:00.625961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.247 BaseBdev2 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.247 [ 00:15:35.247 { 00:15:35.247 "name": "BaseBdev2", 00:15:35.247 "aliases": [ 00:15:35.247 "9f40e689-d054-446d-a992-de3b4f3309d9" 00:15:35.247 ], 00:15:35.247 "product_name": "Malloc disk", 00:15:35.247 "block_size": 512, 00:15:35.247 "num_blocks": 65536, 00:15:35.247 "uuid": "9f40e689-d054-446d-a992-de3b4f3309d9", 00:15:35.247 "assigned_rate_limits": { 00:15:35.247 "rw_ios_per_sec": 0, 00:15:35.247 "rw_mbytes_per_sec": 0, 00:15:35.247 "r_mbytes_per_sec": 0, 00:15:35.247 "w_mbytes_per_sec": 0 00:15:35.247 }, 00:15:35.247 "claimed": true, 
00:15:35.247 "claim_type": "exclusive_write", 00:15:35.247 "zoned": false, 00:15:35.247 "supported_io_types": { 00:15:35.247 "read": true, 00:15:35.247 "write": true, 00:15:35.247 "unmap": true, 00:15:35.247 "flush": true, 00:15:35.247 "reset": true, 00:15:35.247 "nvme_admin": false, 00:15:35.247 "nvme_io": false, 00:15:35.247 "nvme_io_md": false, 00:15:35.247 "write_zeroes": true, 00:15:35.247 "zcopy": true, 00:15:35.247 "get_zone_info": false, 00:15:35.247 "zone_management": false, 00:15:35.247 "zone_append": false, 00:15:35.247 "compare": false, 00:15:35.247 "compare_and_write": false, 00:15:35.247 "abort": true, 00:15:35.247 "seek_hole": false, 00:15:35.247 "seek_data": false, 00:15:35.247 "copy": true, 00:15:35.247 "nvme_iov_md": false 00:15:35.247 }, 00:15:35.247 "memory_domains": [ 00:15:35.247 { 00:15:35.247 "dma_device_id": "system", 00:15:35.247 "dma_device_type": 1 00:15:35.247 }, 00:15:35.247 { 00:15:35.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.247 "dma_device_type": 2 00:15:35.247 } 00:15:35.247 ], 00:15:35.247 "driver_specific": {} 00:15:35.247 } 00:15:35.247 ] 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.247 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.248 09:28:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.248 "name": "Existed_Raid", 00:15:35.248 "uuid": "f7743f6c-077c-40e6-b768-a4e8f503f548", 00:15:35.248 "strip_size_kb": 64, 00:15:35.248 "state": "configuring", 00:15:35.248 "raid_level": "raid5f", 00:15:35.248 "superblock": true, 00:15:35.248 "num_base_bdevs": 3, 00:15:35.248 "num_base_bdevs_discovered": 2, 00:15:35.248 "num_base_bdevs_operational": 3, 00:15:35.248 "base_bdevs_list": [ 00:15:35.248 { 00:15:35.248 "name": "BaseBdev1", 00:15:35.248 "uuid": "f7fa42d1-851d-44cd-b1f8-689a93aed1a2", 
00:15:35.248 "is_configured": true, 00:15:35.248 "data_offset": 2048, 00:15:35.248 "data_size": 63488 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "name": "BaseBdev2", 00:15:35.248 "uuid": "9f40e689-d054-446d-a992-de3b4f3309d9", 00:15:35.248 "is_configured": true, 00:15:35.248 "data_offset": 2048, 00:15:35.248 "data_size": 63488 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "name": "BaseBdev3", 00:15:35.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.248 "is_configured": false, 00:15:35.248 "data_offset": 0, 00:15:35.248 "data_size": 0 00:15:35.248 } 00:15:35.248 ] 00:15:35.248 }' 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.248 09:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 [2024-11-20 09:28:01.138215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.815 [2024-11-20 09:28:01.138544] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:35.815 [2024-11-20 09:28:01.138576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:35.815 BaseBdev3 00:15:35.815 [2024-11-20 09:28:01.138883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 [2024-11-20 09:28:01.145520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:35.815 [2024-11-20 09:28:01.145548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:35.815 [2024-11-20 09:28:01.145890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 [ 00:15:35.815 { 00:15:35.815 "name": "BaseBdev3", 00:15:35.815 "aliases": [ 00:15:35.815 "6f89a256-a613-4b54-9dc6-b1dc619b1a07" 00:15:35.815 ], 00:15:35.815 "product_name": "Malloc disk", 00:15:35.815 "block_size": 512, 00:15:35.815 
"num_blocks": 65536, 00:15:35.815 "uuid": "6f89a256-a613-4b54-9dc6-b1dc619b1a07", 00:15:35.815 "assigned_rate_limits": { 00:15:35.815 "rw_ios_per_sec": 0, 00:15:35.815 "rw_mbytes_per_sec": 0, 00:15:35.815 "r_mbytes_per_sec": 0, 00:15:35.815 "w_mbytes_per_sec": 0 00:15:35.815 }, 00:15:35.815 "claimed": true, 00:15:35.815 "claim_type": "exclusive_write", 00:15:35.815 "zoned": false, 00:15:35.815 "supported_io_types": { 00:15:35.815 "read": true, 00:15:35.815 "write": true, 00:15:35.815 "unmap": true, 00:15:35.815 "flush": true, 00:15:35.815 "reset": true, 00:15:35.815 "nvme_admin": false, 00:15:35.815 "nvme_io": false, 00:15:35.815 "nvme_io_md": false, 00:15:35.815 "write_zeroes": true, 00:15:35.815 "zcopy": true, 00:15:35.815 "get_zone_info": false, 00:15:35.815 "zone_management": false, 00:15:35.815 "zone_append": false, 00:15:35.815 "compare": false, 00:15:35.815 "compare_and_write": false, 00:15:35.815 "abort": true, 00:15:35.815 "seek_hole": false, 00:15:35.815 "seek_data": false, 00:15:35.815 "copy": true, 00:15:35.815 "nvme_iov_md": false 00:15:35.815 }, 00:15:35.815 "memory_domains": [ 00:15:35.815 { 00:15:35.815 "dma_device_id": "system", 00:15:35.815 "dma_device_type": 1 00:15:35.815 }, 00:15:35.815 { 00:15:35.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.815 "dma_device_type": 2 00:15:35.815 } 00:15:35.815 ], 00:15:35.815 "driver_specific": {} 00:15:35.815 } 00:15:35.815 ] 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.815 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.815 "name": "Existed_Raid", 00:15:35.815 "uuid": "f7743f6c-077c-40e6-b768-a4e8f503f548", 00:15:35.815 "strip_size_kb": 64, 00:15:35.815 "state": "online", 00:15:35.816 "raid_level": "raid5f", 00:15:35.816 "superblock": true, 
00:15:35.816 "num_base_bdevs": 3, 00:15:35.816 "num_base_bdevs_discovered": 3, 00:15:35.816 "num_base_bdevs_operational": 3, 00:15:35.816 "base_bdevs_list": [ 00:15:35.816 { 00:15:35.816 "name": "BaseBdev1", 00:15:35.816 "uuid": "f7fa42d1-851d-44cd-b1f8-689a93aed1a2", 00:15:35.816 "is_configured": true, 00:15:35.816 "data_offset": 2048, 00:15:35.816 "data_size": 63488 00:15:35.816 }, 00:15:35.816 { 00:15:35.816 "name": "BaseBdev2", 00:15:35.816 "uuid": "9f40e689-d054-446d-a992-de3b4f3309d9", 00:15:35.816 "is_configured": true, 00:15:35.816 "data_offset": 2048, 00:15:35.816 "data_size": 63488 00:15:35.816 }, 00:15:35.816 { 00:15:35.816 "name": "BaseBdev3", 00:15:35.816 "uuid": "6f89a256-a613-4b54-9dc6-b1dc619b1a07", 00:15:35.816 "is_configured": true, 00:15:35.816 "data_offset": 2048, 00:15:35.816 "data_size": 63488 00:15:35.816 } 00:15:35.816 ] 00:15:35.816 }' 00:15:35.816 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.816 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.384 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.384 [2024-11-20 09:28:01.600834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.385 "name": "Existed_Raid", 00:15:36.385 "aliases": [ 00:15:36.385 "f7743f6c-077c-40e6-b768-a4e8f503f548" 00:15:36.385 ], 00:15:36.385 "product_name": "Raid Volume", 00:15:36.385 "block_size": 512, 00:15:36.385 "num_blocks": 126976, 00:15:36.385 "uuid": "f7743f6c-077c-40e6-b768-a4e8f503f548", 00:15:36.385 "assigned_rate_limits": { 00:15:36.385 "rw_ios_per_sec": 0, 00:15:36.385 "rw_mbytes_per_sec": 0, 00:15:36.385 "r_mbytes_per_sec": 0, 00:15:36.385 "w_mbytes_per_sec": 0 00:15:36.385 }, 00:15:36.385 "claimed": false, 00:15:36.385 "zoned": false, 00:15:36.385 "supported_io_types": { 00:15:36.385 "read": true, 00:15:36.385 "write": true, 00:15:36.385 "unmap": false, 00:15:36.385 "flush": false, 00:15:36.385 "reset": true, 00:15:36.385 "nvme_admin": false, 00:15:36.385 "nvme_io": false, 00:15:36.385 "nvme_io_md": false, 00:15:36.385 "write_zeroes": true, 00:15:36.385 "zcopy": false, 00:15:36.385 "get_zone_info": false, 00:15:36.385 "zone_management": false, 00:15:36.385 "zone_append": false, 00:15:36.385 "compare": false, 00:15:36.385 "compare_and_write": false, 00:15:36.385 "abort": false, 00:15:36.385 "seek_hole": false, 00:15:36.385 "seek_data": false, 00:15:36.385 "copy": false, 00:15:36.385 "nvme_iov_md": false 00:15:36.385 }, 00:15:36.385 "driver_specific": { 00:15:36.385 "raid": { 00:15:36.385 "uuid": "f7743f6c-077c-40e6-b768-a4e8f503f548", 00:15:36.385 
"strip_size_kb": 64, 00:15:36.385 "state": "online", 00:15:36.385 "raid_level": "raid5f", 00:15:36.385 "superblock": true, 00:15:36.385 "num_base_bdevs": 3, 00:15:36.385 "num_base_bdevs_discovered": 3, 00:15:36.385 "num_base_bdevs_operational": 3, 00:15:36.385 "base_bdevs_list": [ 00:15:36.385 { 00:15:36.385 "name": "BaseBdev1", 00:15:36.385 "uuid": "f7fa42d1-851d-44cd-b1f8-689a93aed1a2", 00:15:36.385 "is_configured": true, 00:15:36.385 "data_offset": 2048, 00:15:36.385 "data_size": 63488 00:15:36.385 }, 00:15:36.385 { 00:15:36.385 "name": "BaseBdev2", 00:15:36.385 "uuid": "9f40e689-d054-446d-a992-de3b4f3309d9", 00:15:36.385 "is_configured": true, 00:15:36.385 "data_offset": 2048, 00:15:36.385 "data_size": 63488 00:15:36.385 }, 00:15:36.385 { 00:15:36.385 "name": "BaseBdev3", 00:15:36.385 "uuid": "6f89a256-a613-4b54-9dc6-b1dc619b1a07", 00:15:36.385 "is_configured": true, 00:15:36.385 "data_offset": 2048, 00:15:36.385 "data_size": 63488 00:15:36.385 } 00:15:36.385 ] 00:15:36.385 } 00:15:36.385 } 00:15:36.385 }' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.385 BaseBdev2 00:15:36.385 BaseBdev3' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.645 [2024-11-20 09:28:01.864244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.645 09:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.645 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.645 "name": "Existed_Raid", 00:15:36.645 "uuid": "f7743f6c-077c-40e6-b768-a4e8f503f548", 00:15:36.645 "strip_size_kb": 64, 00:15:36.645 "state": "online", 00:15:36.645 "raid_level": "raid5f", 00:15:36.645 "superblock": true, 00:15:36.645 "num_base_bdevs": 3, 00:15:36.645 "num_base_bdevs_discovered": 2, 00:15:36.645 "num_base_bdevs_operational": 2, 
00:15:36.645 "base_bdevs_list": [ 00:15:36.645 { 00:15:36.645 "name": null, 00:15:36.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.645 "is_configured": false, 00:15:36.645 "data_offset": 0, 00:15:36.645 "data_size": 63488 00:15:36.645 }, 00:15:36.645 { 00:15:36.645 "name": "BaseBdev2", 00:15:36.645 "uuid": "9f40e689-d054-446d-a992-de3b4f3309d9", 00:15:36.645 "is_configured": true, 00:15:36.645 "data_offset": 2048, 00:15:36.645 "data_size": 63488 00:15:36.645 }, 00:15:36.645 { 00:15:36.645 "name": "BaseBdev3", 00:15:36.645 "uuid": "6f89a256-a613-4b54-9dc6-b1dc619b1a07", 00:15:36.645 "is_configured": true, 00:15:36.645 "data_offset": 2048, 00:15:36.645 "data_size": 63488 00:15:36.645 } 00:15:36.645 ] 00:15:36.645 }' 00:15:36.645 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.645 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.214 [2024-11-20 09:28:02.429482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.214 [2024-11-20 09:28:02.429652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.214 [2024-11-20 09:28:02.547496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.214 
09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.214 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.214 [2024-11-20 09:28:02.599402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.214 [2024-11-20 09:28:02.599480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.474 BaseBdev2 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.474 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.475 [ 00:15:37.475 { 
00:15:37.475 "name": "BaseBdev2", 00:15:37.475 "aliases": [ 00:15:37.475 "ee683208-a1f6-4f22-a727-038748c93aae" 00:15:37.475 ], 00:15:37.475 "product_name": "Malloc disk", 00:15:37.475 "block_size": 512, 00:15:37.475 "num_blocks": 65536, 00:15:37.475 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:37.475 "assigned_rate_limits": { 00:15:37.475 "rw_ios_per_sec": 0, 00:15:37.475 "rw_mbytes_per_sec": 0, 00:15:37.475 "r_mbytes_per_sec": 0, 00:15:37.475 "w_mbytes_per_sec": 0 00:15:37.475 }, 00:15:37.475 "claimed": false, 00:15:37.475 "zoned": false, 00:15:37.475 "supported_io_types": { 00:15:37.475 "read": true, 00:15:37.475 "write": true, 00:15:37.475 "unmap": true, 00:15:37.475 "flush": true, 00:15:37.475 "reset": true, 00:15:37.475 "nvme_admin": false, 00:15:37.475 "nvme_io": false, 00:15:37.475 "nvme_io_md": false, 00:15:37.475 "write_zeroes": true, 00:15:37.475 "zcopy": true, 00:15:37.475 "get_zone_info": false, 00:15:37.475 "zone_management": false, 00:15:37.475 "zone_append": false, 00:15:37.475 "compare": false, 00:15:37.475 "compare_and_write": false, 00:15:37.475 "abort": true, 00:15:37.475 "seek_hole": false, 00:15:37.475 "seek_data": false, 00:15:37.475 "copy": true, 00:15:37.475 "nvme_iov_md": false 00:15:37.475 }, 00:15:37.475 "memory_domains": [ 00:15:37.475 { 00:15:37.475 "dma_device_id": "system", 00:15:37.475 "dma_device_type": 1 00:15:37.475 }, 00:15:37.475 { 00:15:37.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.475 "dma_device_type": 2 00:15:37.475 } 00:15:37.475 ], 00:15:37.475 "driver_specific": {} 00:15:37.475 } 00:15:37.475 ] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.475 BaseBdev3 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.475 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.475 09:28:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.734 [ 00:15:37.734 { 00:15:37.734 "name": "BaseBdev3", 00:15:37.734 "aliases": [ 00:15:37.734 "e6f3ee91-52ab-4de9-873b-f6d3224f3605" 00:15:37.734 ], 00:15:37.734 "product_name": "Malloc disk", 00:15:37.734 "block_size": 512, 00:15:37.734 "num_blocks": 65536, 00:15:37.734 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:37.734 "assigned_rate_limits": { 00:15:37.734 "rw_ios_per_sec": 0, 00:15:37.734 "rw_mbytes_per_sec": 0, 00:15:37.734 "r_mbytes_per_sec": 0, 00:15:37.734 "w_mbytes_per_sec": 0 00:15:37.734 }, 00:15:37.734 "claimed": false, 00:15:37.734 "zoned": false, 00:15:37.734 "supported_io_types": { 00:15:37.734 "read": true, 00:15:37.734 "write": true, 00:15:37.734 "unmap": true, 00:15:37.734 "flush": true, 00:15:37.734 "reset": true, 00:15:37.734 "nvme_admin": false, 00:15:37.734 "nvme_io": false, 00:15:37.734 "nvme_io_md": false, 00:15:37.734 "write_zeroes": true, 00:15:37.734 "zcopy": true, 00:15:37.734 "get_zone_info": false, 00:15:37.735 "zone_management": false, 00:15:37.735 "zone_append": false, 00:15:37.735 "compare": false, 00:15:37.735 "compare_and_write": false, 00:15:37.735 "abort": true, 00:15:37.735 "seek_hole": false, 00:15:37.735 "seek_data": false, 00:15:37.735 "copy": true, 00:15:37.735 "nvme_iov_md": false 00:15:37.735 }, 00:15:37.735 "memory_domains": [ 00:15:37.735 { 00:15:37.735 "dma_device_id": "system", 00:15:37.735 "dma_device_type": 1 00:15:37.735 }, 00:15:37.735 { 00:15:37.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.735 "dma_device_type": 2 00:15:37.735 } 00:15:37.735 ], 00:15:37.735 "driver_specific": {} 00:15:37.735 } 00:15:37.735 ] 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.735 [2024-11-20 09:28:02.956362] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.735 [2024-11-20 09:28:02.956415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.735 [2024-11-20 09:28:02.956458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.735 [2024-11-20 09:28:02.958583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.735 09:28:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.735 09:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.735 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.735 "name": "Existed_Raid", 00:15:37.735 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:37.735 "strip_size_kb": 64, 00:15:37.735 "state": "configuring", 00:15:37.735 "raid_level": "raid5f", 00:15:37.735 "superblock": true, 00:15:37.735 "num_base_bdevs": 3, 00:15:37.735 "num_base_bdevs_discovered": 2, 00:15:37.735 "num_base_bdevs_operational": 3, 00:15:37.735 "base_bdevs_list": [ 00:15:37.735 { 00:15:37.735 "name": "BaseBdev1", 00:15:37.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.735 "is_configured": false, 00:15:37.735 "data_offset": 0, 00:15:37.735 "data_size": 0 00:15:37.735 }, 00:15:37.735 { 00:15:37.735 "name": "BaseBdev2", 00:15:37.735 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:37.735 "is_configured": true, 00:15:37.735 "data_offset": 2048, 00:15:37.735 "data_size": 63488 00:15:37.735 }, 00:15:37.735 { 
00:15:37.735 "name": "BaseBdev3", 00:15:37.735 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:37.735 "is_configured": true, 00:15:37.735 "data_offset": 2048, 00:15:37.735 "data_size": 63488 00:15:37.735 } 00:15:37.735 ] 00:15:37.735 }' 00:15:37.735 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.735 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.994 [2024-11-20 09:28:03.419626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.994 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.995 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.995 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.995 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.254 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.254 "name": "Existed_Raid", 00:15:38.254 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:38.255 "strip_size_kb": 64, 00:15:38.255 "state": "configuring", 00:15:38.255 "raid_level": "raid5f", 00:15:38.255 "superblock": true, 00:15:38.255 "num_base_bdevs": 3, 00:15:38.255 "num_base_bdevs_discovered": 1, 00:15:38.255 "num_base_bdevs_operational": 3, 00:15:38.255 "base_bdevs_list": [ 00:15:38.255 { 00:15:38.255 "name": "BaseBdev1", 00:15:38.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.255 "is_configured": false, 00:15:38.255 "data_offset": 0, 00:15:38.255 "data_size": 0 00:15:38.255 }, 00:15:38.255 { 00:15:38.255 "name": null, 00:15:38.255 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:38.255 "is_configured": false, 00:15:38.255 "data_offset": 0, 00:15:38.255 "data_size": 63488 00:15:38.255 }, 00:15:38.255 { 00:15:38.255 "name": "BaseBdev3", 00:15:38.255 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:38.255 "is_configured": true, 00:15:38.255 "data_offset": 2048, 00:15:38.255 "data_size": 
63488 00:15:38.255 } 00:15:38.255 ] 00:15:38.255 }' 00:15:38.255 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.255 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.515 [2024-11-20 09:28:03.946868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.515 BaseBdev1 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.515 09:28:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.515 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.775 [ 00:15:38.775 { 00:15:38.775 "name": "BaseBdev1", 00:15:38.775 "aliases": [ 00:15:38.775 "7cc90c0a-38fe-4579-9aef-f79f8fec8351" 00:15:38.775 ], 00:15:38.775 "product_name": "Malloc disk", 00:15:38.775 "block_size": 512, 00:15:38.775 "num_blocks": 65536, 00:15:38.775 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:38.775 "assigned_rate_limits": { 00:15:38.775 "rw_ios_per_sec": 0, 00:15:38.775 "rw_mbytes_per_sec": 0, 00:15:38.775 "r_mbytes_per_sec": 0, 00:15:38.775 "w_mbytes_per_sec": 0 00:15:38.775 }, 00:15:38.775 "claimed": true, 00:15:38.775 "claim_type": "exclusive_write", 00:15:38.775 "zoned": false, 00:15:38.775 "supported_io_types": { 00:15:38.775 "read": true, 00:15:38.775 "write": true, 00:15:38.775 "unmap": true, 00:15:38.775 "flush": true, 00:15:38.775 "reset": true, 00:15:38.775 "nvme_admin": false, 00:15:38.775 
"nvme_io": false, 00:15:38.775 "nvme_io_md": false, 00:15:38.775 "write_zeroes": true, 00:15:38.775 "zcopy": true, 00:15:38.775 "get_zone_info": false, 00:15:38.775 "zone_management": false, 00:15:38.775 "zone_append": false, 00:15:38.775 "compare": false, 00:15:38.775 "compare_and_write": false, 00:15:38.775 "abort": true, 00:15:38.775 "seek_hole": false, 00:15:38.775 "seek_data": false, 00:15:38.775 "copy": true, 00:15:38.775 "nvme_iov_md": false 00:15:38.775 }, 00:15:38.775 "memory_domains": [ 00:15:38.775 { 00:15:38.775 "dma_device_id": "system", 00:15:38.775 "dma_device_type": 1 00:15:38.775 }, 00:15:38.775 { 00:15:38.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.775 "dma_device_type": 2 00:15:38.775 } 00:15:38.775 ], 00:15:38.775 "driver_specific": {} 00:15:38.775 } 00:15:38.775 ] 00:15:38.775 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.775 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.775 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.776 09:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.776 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.776 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.776 "name": "Existed_Raid", 00:15:38.776 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:38.776 "strip_size_kb": 64, 00:15:38.776 "state": "configuring", 00:15:38.776 "raid_level": "raid5f", 00:15:38.776 "superblock": true, 00:15:38.776 "num_base_bdevs": 3, 00:15:38.776 "num_base_bdevs_discovered": 2, 00:15:38.776 "num_base_bdevs_operational": 3, 00:15:38.776 "base_bdevs_list": [ 00:15:38.776 { 00:15:38.776 "name": "BaseBdev1", 00:15:38.776 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:38.776 "is_configured": true, 00:15:38.776 "data_offset": 2048, 00:15:38.776 "data_size": 63488 00:15:38.776 }, 00:15:38.776 { 00:15:38.776 "name": null, 00:15:38.776 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:38.776 "is_configured": false, 00:15:38.776 "data_offset": 0, 00:15:38.776 "data_size": 63488 00:15:38.776 }, 00:15:38.776 { 00:15:38.776 "name": "BaseBdev3", 00:15:38.776 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:38.776 "is_configured": true, 00:15:38.776 "data_offset": 2048, 00:15:38.776 "data_size": 
63488 00:15:38.776 } 00:15:38.776 ] 00:15:38.776 }' 00:15:38.776 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.776 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.035 [2024-11-20 09:28:04.430136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.035 09:28:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.035 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.036 "name": "Existed_Raid", 00:15:39.036 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:39.036 "strip_size_kb": 64, 00:15:39.036 "state": "configuring", 00:15:39.036 "raid_level": "raid5f", 00:15:39.036 "superblock": true, 00:15:39.036 "num_base_bdevs": 3, 00:15:39.036 "num_base_bdevs_discovered": 1, 00:15:39.036 "num_base_bdevs_operational": 3, 00:15:39.036 "base_bdevs_list": [ 00:15:39.036 { 00:15:39.036 "name": "BaseBdev1", 00:15:39.036 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 
00:15:39.036 "is_configured": true, 00:15:39.036 "data_offset": 2048, 00:15:39.036 "data_size": 63488 00:15:39.036 }, 00:15:39.036 { 00:15:39.036 "name": null, 00:15:39.036 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:39.036 "is_configured": false, 00:15:39.036 "data_offset": 0, 00:15:39.036 "data_size": 63488 00:15:39.036 }, 00:15:39.036 { 00:15:39.036 "name": null, 00:15:39.036 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:39.036 "is_configured": false, 00:15:39.036 "data_offset": 0, 00:15:39.036 "data_size": 63488 00:15:39.036 } 00:15:39.036 ] 00:15:39.036 }' 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.036 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 [2024-11-20 09:28:04.873452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.604 "name": "Existed_Raid", 00:15:39.604 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:39.604 "strip_size_kb": 64, 00:15:39.604 "state": "configuring", 00:15:39.604 "raid_level": "raid5f", 00:15:39.604 "superblock": true, 00:15:39.604 "num_base_bdevs": 3, 00:15:39.604 "num_base_bdevs_discovered": 2, 00:15:39.604 "num_base_bdevs_operational": 3, 00:15:39.604 "base_bdevs_list": [ 00:15:39.604 { 00:15:39.604 "name": "BaseBdev1", 00:15:39.604 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:39.604 "is_configured": true, 00:15:39.604 "data_offset": 2048, 00:15:39.604 "data_size": 63488 00:15:39.604 }, 00:15:39.604 { 00:15:39.604 "name": null, 00:15:39.604 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:39.604 "is_configured": false, 00:15:39.604 "data_offset": 0, 00:15:39.604 "data_size": 63488 00:15:39.604 }, 00:15:39.604 { 00:15:39.604 "name": "BaseBdev3", 00:15:39.604 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:39.604 "is_configured": true, 00:15:39.604 "data_offset": 2048, 00:15:39.604 "data_size": 63488 00:15:39.604 } 00:15:39.604 ] 00:15:39.604 }' 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.604 09:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.864 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.864 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.864 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.864 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.123 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.124 09:28:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.124 [2024-11-20 09:28:05.364649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.124 "name": "Existed_Raid", 00:15:40.124 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:40.124 "strip_size_kb": 64, 00:15:40.124 "state": "configuring", 00:15:40.124 "raid_level": "raid5f", 00:15:40.124 "superblock": true, 00:15:40.124 "num_base_bdevs": 3, 00:15:40.124 "num_base_bdevs_discovered": 1, 00:15:40.124 "num_base_bdevs_operational": 3, 00:15:40.124 "base_bdevs_list": [ 00:15:40.124 { 00:15:40.124 "name": null, 00:15:40.124 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:40.124 "is_configured": false, 00:15:40.124 "data_offset": 0, 00:15:40.124 "data_size": 63488 00:15:40.124 }, 00:15:40.124 { 00:15:40.124 "name": null, 00:15:40.124 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:40.124 "is_configured": false, 00:15:40.124 "data_offset": 0, 00:15:40.124 "data_size": 63488 00:15:40.124 }, 00:15:40.124 { 00:15:40.124 "name": "BaseBdev3", 00:15:40.124 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:40.124 "is_configured": true, 00:15:40.124 "data_offset": 2048, 00:15:40.124 "data_size": 63488 00:15:40.124 } 00:15:40.124 ] 00:15:40.124 }' 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.124 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.693 [2024-11-20 09:28:05.994290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.693 09:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.693 
09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.693 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.693 "name": "Existed_Raid", 00:15:40.693 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:40.693 "strip_size_kb": 64, 00:15:40.693 "state": "configuring", 00:15:40.693 "raid_level": "raid5f", 00:15:40.693 "superblock": true, 00:15:40.693 "num_base_bdevs": 3, 00:15:40.693 "num_base_bdevs_discovered": 2, 00:15:40.693 "num_base_bdevs_operational": 3, 00:15:40.693 "base_bdevs_list": [ 00:15:40.693 { 00:15:40.693 "name": null, 00:15:40.693 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:40.693 "is_configured": false, 00:15:40.693 "data_offset": 0, 00:15:40.693 "data_size": 63488 00:15:40.693 }, 00:15:40.693 { 00:15:40.693 "name": "BaseBdev2", 00:15:40.693 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:40.693 "is_configured": true, 00:15:40.693 "data_offset": 2048, 00:15:40.693 "data_size": 63488 00:15:40.693 }, 
00:15:40.693 { 00:15:40.693 "name": "BaseBdev3", 00:15:40.694 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:40.694 "is_configured": true, 00:15:40.694 "data_offset": 2048, 00:15:40.694 "data_size": 63488 00:15:40.694 } 00:15:40.694 ] 00:15:40.694 }' 00:15:40.694 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.694 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7cc90c0a-38fe-4579-9aef-f79f8fec8351 00:15:41.262 09:28:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 [2024-11-20 09:28:06.569082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.262 [2024-11-20 09:28:06.569353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.262 [2024-11-20 09:28:06.569379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:41.262 NewBaseBdev 00:15:41.262 [2024-11-20 09:28:06.569687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 [2024-11-20 09:28:06.576395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:15:41.262 [2024-11-20 09:28:06.576423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:41.262 [2024-11-20 09:28:06.576754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 [ 00:15:41.262 { 00:15:41.262 "name": "NewBaseBdev", 00:15:41.262 "aliases": [ 00:15:41.262 "7cc90c0a-38fe-4579-9aef-f79f8fec8351" 00:15:41.262 ], 00:15:41.262 "product_name": "Malloc disk", 00:15:41.262 "block_size": 512, 00:15:41.262 "num_blocks": 65536, 00:15:41.262 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:41.262 "assigned_rate_limits": { 00:15:41.262 "rw_ios_per_sec": 0, 00:15:41.262 "rw_mbytes_per_sec": 0, 00:15:41.262 "r_mbytes_per_sec": 0, 00:15:41.262 "w_mbytes_per_sec": 0 00:15:41.262 }, 00:15:41.262 "claimed": true, 00:15:41.262 "claim_type": "exclusive_write", 00:15:41.262 "zoned": false, 00:15:41.262 "supported_io_types": { 00:15:41.262 "read": true, 00:15:41.262 "write": true, 00:15:41.262 "unmap": true, 00:15:41.262 "flush": true, 00:15:41.262 "reset": true, 00:15:41.262 "nvme_admin": false, 00:15:41.262 "nvme_io": false, 00:15:41.262 "nvme_io_md": false, 00:15:41.262 "write_zeroes": true, 00:15:41.262 "zcopy": true, 00:15:41.262 "get_zone_info": false, 00:15:41.262 "zone_management": false, 00:15:41.262 "zone_append": false, 00:15:41.262 "compare": false, 00:15:41.262 "compare_and_write": false, 00:15:41.262 "abort": true, 00:15:41.262 "seek_hole": false, 
00:15:41.262 "seek_data": false, 00:15:41.262 "copy": true, 00:15:41.262 "nvme_iov_md": false 00:15:41.262 }, 00:15:41.262 "memory_domains": [ 00:15:41.262 { 00:15:41.262 "dma_device_id": "system", 00:15:41.262 "dma_device_type": 1 00:15:41.262 }, 00:15:41.262 { 00:15:41.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.262 "dma_device_type": 2 00:15:41.262 } 00:15:41.262 ], 00:15:41.262 "driver_specific": {} 00:15:41.262 } 00:15:41.262 ] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.262 "name": "Existed_Raid", 00:15:41.262 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:41.262 "strip_size_kb": 64, 00:15:41.262 "state": "online", 00:15:41.262 "raid_level": "raid5f", 00:15:41.262 "superblock": true, 00:15:41.262 "num_base_bdevs": 3, 00:15:41.262 "num_base_bdevs_discovered": 3, 00:15:41.262 "num_base_bdevs_operational": 3, 00:15:41.262 "base_bdevs_list": [ 00:15:41.262 { 00:15:41.262 "name": "NewBaseBdev", 00:15:41.262 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:41.262 "is_configured": true, 00:15:41.262 "data_offset": 2048, 00:15:41.262 "data_size": 63488 00:15:41.262 }, 00:15:41.262 { 00:15:41.262 "name": "BaseBdev2", 00:15:41.262 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:41.262 "is_configured": true, 00:15:41.262 "data_offset": 2048, 00:15:41.262 "data_size": 63488 00:15:41.262 }, 00:15:41.262 { 00:15:41.262 "name": "BaseBdev3", 00:15:41.262 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:41.262 "is_configured": true, 00:15:41.262 "data_offset": 2048, 00:15:41.262 "data_size": 63488 00:15:41.262 } 00:15:41.262 ] 00:15:41.262 }' 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.262 09:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.832 [2024-11-20 09:28:07.031844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.832 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.832 "name": "Existed_Raid", 00:15:41.832 "aliases": [ 00:15:41.832 "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d" 00:15:41.832 ], 00:15:41.832 "product_name": "Raid Volume", 00:15:41.832 "block_size": 512, 00:15:41.832 "num_blocks": 126976, 00:15:41.832 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:41.832 "assigned_rate_limits": { 00:15:41.832 "rw_ios_per_sec": 0, 00:15:41.832 "rw_mbytes_per_sec": 0, 00:15:41.832 "r_mbytes_per_sec": 0, 00:15:41.832 "w_mbytes_per_sec": 0 00:15:41.832 }, 00:15:41.832 "claimed": false, 00:15:41.832 "zoned": false, 00:15:41.832 
"supported_io_types": { 00:15:41.832 "read": true, 00:15:41.832 "write": true, 00:15:41.832 "unmap": false, 00:15:41.832 "flush": false, 00:15:41.832 "reset": true, 00:15:41.832 "nvme_admin": false, 00:15:41.832 "nvme_io": false, 00:15:41.832 "nvme_io_md": false, 00:15:41.832 "write_zeroes": true, 00:15:41.832 "zcopy": false, 00:15:41.832 "get_zone_info": false, 00:15:41.832 "zone_management": false, 00:15:41.832 "zone_append": false, 00:15:41.832 "compare": false, 00:15:41.832 "compare_and_write": false, 00:15:41.832 "abort": false, 00:15:41.832 "seek_hole": false, 00:15:41.832 "seek_data": false, 00:15:41.832 "copy": false, 00:15:41.832 "nvme_iov_md": false 00:15:41.832 }, 00:15:41.832 "driver_specific": { 00:15:41.833 "raid": { 00:15:41.833 "uuid": "9ac44ea1-8bcc-4f67-aa75-0f3d5c40a70d", 00:15:41.833 "strip_size_kb": 64, 00:15:41.833 "state": "online", 00:15:41.833 "raid_level": "raid5f", 00:15:41.833 "superblock": true, 00:15:41.833 "num_base_bdevs": 3, 00:15:41.833 "num_base_bdevs_discovered": 3, 00:15:41.833 "num_base_bdevs_operational": 3, 00:15:41.833 "base_bdevs_list": [ 00:15:41.833 { 00:15:41.833 "name": "NewBaseBdev", 00:15:41.833 "uuid": "7cc90c0a-38fe-4579-9aef-f79f8fec8351", 00:15:41.833 "is_configured": true, 00:15:41.833 "data_offset": 2048, 00:15:41.833 "data_size": 63488 00:15:41.833 }, 00:15:41.833 { 00:15:41.833 "name": "BaseBdev2", 00:15:41.833 "uuid": "ee683208-a1f6-4f22-a727-038748c93aae", 00:15:41.833 "is_configured": true, 00:15:41.833 "data_offset": 2048, 00:15:41.833 "data_size": 63488 00:15:41.833 }, 00:15:41.833 { 00:15:41.833 "name": "BaseBdev3", 00:15:41.833 "uuid": "e6f3ee91-52ab-4de9-873b-f6d3224f3605", 00:15:41.833 "is_configured": true, 00:15:41.833 "data_offset": 2048, 00:15:41.833 "data_size": 63488 00:15:41.833 } 00:15:41.833 ] 00:15:41.833 } 00:15:41.833 } 00:15:41.833 }' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:41.833 BaseBdev2 00:15:41.833 BaseBdev3' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.833 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.093 [2024-11-20 09:28:07.315494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.093 [2024-11-20 09:28:07.315538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:42.093 [2024-11-20 09:28:07.315640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.093 [2024-11-20 09:28:07.315988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.093 [2024-11-20 09:28:07.316013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80908 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80908 ']' 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80908 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80908 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.093 killing process with pid 80908 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80908' 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80908 00:15:42.093 [2024-11-20 09:28:07.355498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.093 09:28:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80908 00:15:42.352 [2024-11-20 09:28:07.720181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.730 09:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:43.730 00:15:43.730 real 0m10.817s 00:15:43.730 user 0m16.898s 00:15:43.730 sys 0m1.904s 00:15:43.730 09:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.730 09:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.730 ************************************ 00:15:43.730 END TEST raid5f_state_function_test_sb 00:15:43.730 ************************************ 00:15:43.730 09:28:09 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:43.730 09:28:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:43.730 09:28:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.730 09:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.730 ************************************ 00:15:43.730 START TEST raid5f_superblock_test 00:15:43.730 ************************************ 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81533 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81533 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81533 ']' 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.730 09:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.012 [2024-11-20 09:28:09.205740] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:15:44.012 [2024-11-20 09:28:09.205899] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81533 ] 00:15:44.012 [2024-11-20 09:28:09.376028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.275 [2024-11-20 09:28:09.512862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.533 [2024-11-20 09:28:09.755548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.533 [2024-11-20 09:28:09.755724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.793 malloc1 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.793 [2024-11-20 09:28:10.179887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:44.793 [2024-11-20 09:28:10.180036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.793 [2024-11-20 09:28:10.180133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:44.793 [2024-11-20 09:28:10.180185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.793 [2024-11-20 09:28:10.182746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.793 pt1 00:15:44.793 [2024-11-20 09:28:10.182835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:44.793 
09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.793 malloc2 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.793 [2024-11-20 09:28:10.234592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.793 [2024-11-20 
09:28:10.234716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.793 [2024-11-20 09:28:10.234791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:44.793 [2024-11-20 09:28:10.234844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.793 [2024-11-20 09:28:10.237418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.793 [2024-11-20 09:28:10.237520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:44.793 pt2 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.793 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.053 malloc3 00:15:45.053 09:28:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.053 [2024-11-20 09:28:10.307333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.053 [2024-11-20 09:28:10.307509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.053 [2024-11-20 09:28:10.307587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:45.053 [2024-11-20 09:28:10.307641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.053 [2024-11-20 09:28:10.310232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.053 [2024-11-20 09:28:10.310334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.053 pt3 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.053 [2024-11-20 09:28:10.319419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:15:45.053 [2024-11-20 09:28:10.321645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.053 [2024-11-20 09:28:10.321789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.053 [2024-11-20 09:28:10.322061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.053 [2024-11-20 09:28:10.322127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:45.053 [2024-11-20 09:28:10.322506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:45.053 [2024-11-20 09:28:10.329459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.053 [2024-11-20 09:28:10.329483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.053 [2024-11-20 09:28:10.329745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.053 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.054 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.054 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.054 "name": "raid_bdev1", 00:15:45.054 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:45.054 "strip_size_kb": 64, 00:15:45.054 "state": "online", 00:15:45.054 "raid_level": "raid5f", 00:15:45.054 "superblock": true, 00:15:45.054 "num_base_bdevs": 3, 00:15:45.054 "num_base_bdevs_discovered": 3, 00:15:45.054 "num_base_bdevs_operational": 3, 00:15:45.054 "base_bdevs_list": [ 00:15:45.054 { 00:15:45.054 "name": "pt1", 00:15:45.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.054 "is_configured": true, 00:15:45.054 "data_offset": 2048, 00:15:45.054 "data_size": 63488 00:15:45.054 }, 00:15:45.054 { 00:15:45.054 "name": "pt2", 00:15:45.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.054 "is_configured": true, 00:15:45.054 "data_offset": 2048, 00:15:45.054 "data_size": 63488 00:15:45.054 }, 00:15:45.054 { 00:15:45.054 "name": "pt3", 00:15:45.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.054 "is_configured": true, 00:15:45.054 "data_offset": 2048, 00:15:45.054 "data_size": 63488 00:15:45.054 } 00:15:45.054 ] 
00:15:45.054 }' 00:15:45.054 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.054 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.620 [2024-11-20 09:28:10.796954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.620 "name": "raid_bdev1", 00:15:45.620 "aliases": [ 00:15:45.620 "b0d29f6e-5fde-4d41-a8e0-d36c5528b705" 00:15:45.620 ], 00:15:45.620 "product_name": "Raid Volume", 00:15:45.620 "block_size": 512, 00:15:45.620 "num_blocks": 126976, 00:15:45.620 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:45.620 "assigned_rate_limits": { 00:15:45.620 
"rw_ios_per_sec": 0, 00:15:45.620 "rw_mbytes_per_sec": 0, 00:15:45.620 "r_mbytes_per_sec": 0, 00:15:45.620 "w_mbytes_per_sec": 0 00:15:45.620 }, 00:15:45.620 "claimed": false, 00:15:45.620 "zoned": false, 00:15:45.620 "supported_io_types": { 00:15:45.620 "read": true, 00:15:45.620 "write": true, 00:15:45.620 "unmap": false, 00:15:45.620 "flush": false, 00:15:45.620 "reset": true, 00:15:45.620 "nvme_admin": false, 00:15:45.620 "nvme_io": false, 00:15:45.620 "nvme_io_md": false, 00:15:45.620 "write_zeroes": true, 00:15:45.620 "zcopy": false, 00:15:45.620 "get_zone_info": false, 00:15:45.620 "zone_management": false, 00:15:45.620 "zone_append": false, 00:15:45.620 "compare": false, 00:15:45.620 "compare_and_write": false, 00:15:45.620 "abort": false, 00:15:45.620 "seek_hole": false, 00:15:45.620 "seek_data": false, 00:15:45.620 "copy": false, 00:15:45.620 "nvme_iov_md": false 00:15:45.620 }, 00:15:45.620 "driver_specific": { 00:15:45.620 "raid": { 00:15:45.620 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:45.620 "strip_size_kb": 64, 00:15:45.620 "state": "online", 00:15:45.620 "raid_level": "raid5f", 00:15:45.620 "superblock": true, 00:15:45.620 "num_base_bdevs": 3, 00:15:45.620 "num_base_bdevs_discovered": 3, 00:15:45.620 "num_base_bdevs_operational": 3, 00:15:45.620 "base_bdevs_list": [ 00:15:45.620 { 00:15:45.620 "name": "pt1", 00:15:45.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.620 "is_configured": true, 00:15:45.620 "data_offset": 2048, 00:15:45.620 "data_size": 63488 00:15:45.620 }, 00:15:45.620 { 00:15:45.620 "name": "pt2", 00:15:45.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.620 "is_configured": true, 00:15:45.620 "data_offset": 2048, 00:15:45.620 "data_size": 63488 00:15:45.620 }, 00:15:45.620 { 00:15:45.620 "name": "pt3", 00:15:45.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.620 "is_configured": true, 00:15:45.620 "data_offset": 2048, 00:15:45.620 "data_size": 63488 00:15:45.620 } 00:15:45.620 ] 
00:15:45.620 } 00:15:45.620 } 00:15:45.620 }' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:45.620 pt2 00:15:45.620 pt3' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.620 09:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.620 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 [2024-11-20 09:28:11.084502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b0d29f6e-5fde-4d41-a8e0-d36c5528b705 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b0d29f6e-5fde-4d41-a8e0-d36c5528b705 ']' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 [2024-11-20 09:28:11.128176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.880 [2024-11-20 09:28:11.128257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.880 [2024-11-20 09:28:11.128390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.880 [2024-11-20 09:28:11.128547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.880 [2024-11-20 09:28:11.128565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:45.880 09:28:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 [2024-11-20 09:28:11.264062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:45.880 [2024-11-20 
09:28:11.266302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:45.880 [2024-11-20 09:28:11.266369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:45.880 [2024-11-20 09:28:11.266455] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:45.880 [2024-11-20 09:28:11.266515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:45.880 [2024-11-20 09:28:11.266539] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:45.880 [2024-11-20 09:28:11.266560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.880 [2024-11-20 09:28:11.266572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:45.880 request: 00:15:45.880 { 00:15:45.880 "name": "raid_bdev1", 00:15:45.880 "raid_level": "raid5f", 00:15:45.880 "base_bdevs": [ 00:15:45.880 "malloc1", 00:15:45.880 "malloc2", 00:15:45.880 "malloc3" 00:15:45.880 ], 00:15:45.880 "strip_size_kb": 64, 00:15:45.880 "superblock": false, 00:15:45.880 "method": "bdev_raid_create", 00:15:45.880 "req_id": 1 00:15:45.880 } 00:15:45.880 Got JSON-RPC error response 00:15:45.880 response: 00:15:45.880 { 00:15:45.880 "code": -17, 00:15:45.880 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:45.880 } 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 [2024-11-20 09:28:11.319881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.880 [2024-11-20 09:28:11.320015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.880 [2024-11-20 09:28:11.320077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.880 [2024-11-20 09:28:11.320121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.880 [2024-11-20 09:28:11.322713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.880 [2024-11-20 09:28:11.322812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.881 [2024-11-20 09:28:11.322972] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:45.881 [2024-11-20 09:28:11.323086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.881 pt1 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.881 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.138 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.138 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.138 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.138 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.138 09:28:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.138 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.138 "name": "raid_bdev1", 00:15:46.138 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:46.138 "strip_size_kb": 64, 00:15:46.139 "state": "configuring", 00:15:46.139 "raid_level": "raid5f", 00:15:46.139 "superblock": true, 00:15:46.139 "num_base_bdevs": 3, 00:15:46.139 "num_base_bdevs_discovered": 1, 00:15:46.139 "num_base_bdevs_operational": 3, 00:15:46.139 "base_bdevs_list": [ 00:15:46.139 { 00:15:46.139 "name": "pt1", 00:15:46.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.139 "is_configured": true, 00:15:46.139 "data_offset": 2048, 00:15:46.139 "data_size": 63488 00:15:46.139 }, 00:15:46.139 { 00:15:46.139 "name": null, 00:15:46.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.139 "is_configured": false, 00:15:46.139 "data_offset": 2048, 00:15:46.139 "data_size": 63488 00:15:46.139 }, 00:15:46.139 { 00:15:46.139 "name": null, 00:15:46.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.139 "is_configured": false, 00:15:46.139 "data_offset": 2048, 00:15:46.139 "data_size": 63488 00:15:46.139 } 00:15:46.139 ] 00:15:46.139 }' 00:15:46.139 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.139 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.397 [2024-11-20 09:28:11.759333] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.397 [2024-11-20 09:28:11.759493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.397 [2024-11-20 09:28:11.759567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:46.397 [2024-11-20 09:28:11.759612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.397 [2024-11-20 09:28:11.760163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.397 [2024-11-20 09:28:11.760262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.397 [2024-11-20 09:28:11.760418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.397 [2024-11-20 09:28:11.760502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.397 pt2 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.397 [2024-11-20 09:28:11.771347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.397 "name": "raid_bdev1", 00:15:46.397 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:46.397 "strip_size_kb": 64, 00:15:46.397 "state": "configuring", 00:15:46.397 "raid_level": "raid5f", 00:15:46.397 "superblock": true, 00:15:46.397 "num_base_bdevs": 3, 00:15:46.397 "num_base_bdevs_discovered": 1, 00:15:46.397 "num_base_bdevs_operational": 3, 00:15:46.397 "base_bdevs_list": [ 00:15:46.397 { 00:15:46.397 "name": "pt1", 00:15:46.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.397 "is_configured": true, 00:15:46.397 "data_offset": 2048, 00:15:46.397 "data_size": 63488 00:15:46.397 }, 00:15:46.397 { 
00:15:46.397 "name": null, 00:15:46.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.397 "is_configured": false, 00:15:46.397 "data_offset": 0, 00:15:46.397 "data_size": 63488 00:15:46.397 }, 00:15:46.397 { 00:15:46.397 "name": null, 00:15:46.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.397 "is_configured": false, 00:15:46.397 "data_offset": 2048, 00:15:46.397 "data_size": 63488 00:15:46.397 } 00:15:46.397 ] 00:15:46.397 }' 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.397 09:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.964 [2024-11-20 09:28:12.214575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.964 [2024-11-20 09:28:12.214661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.964 [2024-11-20 09:28:12.214685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:46.964 [2024-11-20 09:28:12.214698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.964 [2024-11-20 09:28:12.215238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.964 [2024-11-20 09:28:12.215279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.964 [2024-11-20 
09:28:12.215395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.964 [2024-11-20 09:28:12.215449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.964 pt2 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.964 [2024-11-20 09:28:12.226583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:46.964 [2024-11-20 09:28:12.226730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.964 [2024-11-20 09:28:12.226803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:46.964 [2024-11-20 09:28:12.226882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.964 [2024-11-20 09:28:12.227515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.964 [2024-11-20 09:28:12.227597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:46.964 [2024-11-20 09:28:12.227764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:46.964 [2024-11-20 09:28:12.227839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:46.964 [2024-11-20 09:28:12.228041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:15:46.964 [2024-11-20 09:28:12.228089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.964 [2024-11-20 09:28:12.228402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:46.964 [2024-11-20 09:28:12.234860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:46.964 [2024-11-20 09:28:12.234937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:46.964 [2024-11-20 09:28:12.235221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.964 pt3 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.964 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.964 "name": "raid_bdev1", 00:15:46.964 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:46.964 "strip_size_kb": 64, 00:15:46.964 "state": "online", 00:15:46.964 "raid_level": "raid5f", 00:15:46.964 "superblock": true, 00:15:46.964 "num_base_bdevs": 3, 00:15:46.964 "num_base_bdevs_discovered": 3, 00:15:46.964 "num_base_bdevs_operational": 3, 00:15:46.964 "base_bdevs_list": [ 00:15:46.964 { 00:15:46.964 "name": "pt1", 00:15:46.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.964 "is_configured": true, 00:15:46.964 "data_offset": 2048, 00:15:46.964 "data_size": 63488 00:15:46.964 }, 00:15:46.964 { 00:15:46.964 "name": "pt2", 00:15:46.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.964 "is_configured": true, 00:15:46.964 "data_offset": 2048, 00:15:46.964 "data_size": 63488 00:15:46.964 }, 00:15:46.964 { 00:15:46.964 "name": "pt3", 00:15:46.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.964 "is_configured": true, 00:15:46.964 "data_offset": 2048, 00:15:46.964 "data_size": 63488 00:15:46.964 } 00:15:46.964 ] 00:15:46.964 }' 00:15:46.965 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.965 09:28:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.223 [2024-11-20 09:28:12.650613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.223 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.481 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.481 "name": "raid_bdev1", 00:15:47.481 "aliases": [ 00:15:47.481 "b0d29f6e-5fde-4d41-a8e0-d36c5528b705" 00:15:47.481 ], 00:15:47.481 "product_name": "Raid Volume", 00:15:47.481 "block_size": 512, 00:15:47.481 "num_blocks": 126976, 00:15:47.481 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:47.481 "assigned_rate_limits": { 00:15:47.481 "rw_ios_per_sec": 0, 00:15:47.481 "rw_mbytes_per_sec": 0, 00:15:47.481 "r_mbytes_per_sec": 0, 00:15:47.481 "w_mbytes_per_sec": 0 00:15:47.481 }, 
00:15:47.481 "claimed": false, 00:15:47.481 "zoned": false, 00:15:47.481 "supported_io_types": { 00:15:47.481 "read": true, 00:15:47.481 "write": true, 00:15:47.481 "unmap": false, 00:15:47.481 "flush": false, 00:15:47.481 "reset": true, 00:15:47.481 "nvme_admin": false, 00:15:47.481 "nvme_io": false, 00:15:47.481 "nvme_io_md": false, 00:15:47.481 "write_zeroes": true, 00:15:47.481 "zcopy": false, 00:15:47.481 "get_zone_info": false, 00:15:47.481 "zone_management": false, 00:15:47.481 "zone_append": false, 00:15:47.481 "compare": false, 00:15:47.481 "compare_and_write": false, 00:15:47.481 "abort": false, 00:15:47.481 "seek_hole": false, 00:15:47.481 "seek_data": false, 00:15:47.481 "copy": false, 00:15:47.481 "nvme_iov_md": false 00:15:47.481 }, 00:15:47.481 "driver_specific": { 00:15:47.481 "raid": { 00:15:47.481 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:47.481 "strip_size_kb": 64, 00:15:47.481 "state": "online", 00:15:47.481 "raid_level": "raid5f", 00:15:47.481 "superblock": true, 00:15:47.481 "num_base_bdevs": 3, 00:15:47.481 "num_base_bdevs_discovered": 3, 00:15:47.481 "num_base_bdevs_operational": 3, 00:15:47.481 "base_bdevs_list": [ 00:15:47.481 { 00:15:47.481 "name": "pt1", 00:15:47.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.482 "is_configured": true, 00:15:47.482 "data_offset": 2048, 00:15:47.482 "data_size": 63488 00:15:47.482 }, 00:15:47.482 { 00:15:47.482 "name": "pt2", 00:15:47.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.482 "is_configured": true, 00:15:47.482 "data_offset": 2048, 00:15:47.482 "data_size": 63488 00:15:47.482 }, 00:15:47.482 { 00:15:47.482 "name": "pt3", 00:15:47.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.482 "is_configured": true, 00:15:47.482 "data_offset": 2048, 00:15:47.482 "data_size": 63488 00:15:47.482 } 00:15:47.482 ] 00:15:47.482 } 00:15:47.482 } 00:15:47.482 }' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:47.482 pt2 00:15:47.482 pt3' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:47.482 [2024-11-20 09:28:12.914098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.482 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
b0d29f6e-5fde-4d41-a8e0-d36c5528b705 '!=' b0d29f6e-5fde-4d41-a8e0-d36c5528b705 ']' 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.740 [2024-11-20 09:28:12.973838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.740 09:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.740 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.740 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.740 "name": "raid_bdev1", 00:15:47.740 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:47.740 "strip_size_kb": 64, 00:15:47.740 "state": "online", 00:15:47.740 "raid_level": "raid5f", 00:15:47.740 "superblock": true, 00:15:47.740 "num_base_bdevs": 3, 00:15:47.740 "num_base_bdevs_discovered": 2, 00:15:47.740 "num_base_bdevs_operational": 2, 00:15:47.741 "base_bdevs_list": [ 00:15:47.741 { 00:15:47.741 "name": null, 00:15:47.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.741 "is_configured": false, 00:15:47.741 "data_offset": 0, 00:15:47.741 "data_size": 63488 00:15:47.741 }, 00:15:47.741 { 00:15:47.741 "name": "pt2", 00:15:47.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.741 "is_configured": true, 00:15:47.741 "data_offset": 2048, 00:15:47.741 "data_size": 63488 00:15:47.741 }, 00:15:47.741 { 00:15:47.741 "name": "pt3", 00:15:47.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.741 "is_configured": true, 00:15:47.741 "data_offset": 2048, 00:15:47.741 "data_size": 63488 00:15:47.741 } 00:15:47.741 ] 00:15:47.741 }' 00:15:47.741 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.741 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.999 
09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.999 [2024-11-20 09:28:13.417088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.999 [2024-11-20 09:28:13.417179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.999 [2024-11-20 09:28:13.417313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.999 [2024-11-20 09:28:13.417410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.999 [2024-11-20 09:28:13.417479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.999 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.259 [2024-11-20 09:28:13.492922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:48.259 [2024-11-20 09:28:13.493071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.259 [2024-11-20 09:28:13.493120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:48.259 [2024-11-20 09:28:13.493159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.259 [2024-11-20 09:28:13.495702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.259 [2024-11-20 09:28:13.495809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.259 [2024-11-20 09:28:13.495938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.259 [2024-11-20 09:28:13.496027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.259 pt2 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.259 "name": "raid_bdev1", 00:15:48.259 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:48.259 "strip_size_kb": 64, 00:15:48.259 "state": "configuring", 00:15:48.259 "raid_level": "raid5f", 00:15:48.259 "superblock": true, 00:15:48.259 "num_base_bdevs": 3, 00:15:48.259 "num_base_bdevs_discovered": 1, 00:15:48.259 "num_base_bdevs_operational": 2, 00:15:48.259 "base_bdevs_list": [ 00:15:48.259 { 00:15:48.259 "name": null, 00:15:48.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.259 "is_configured": false, 00:15:48.259 "data_offset": 2048, 00:15:48.259 "data_size": 63488 00:15:48.259 }, 00:15:48.259 { 00:15:48.259 "name": "pt2", 00:15:48.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.259 "is_configured": true, 00:15:48.259 "data_offset": 2048, 00:15:48.259 "data_size": 63488 00:15:48.259 }, 00:15:48.259 { 00:15:48.259 "name": null, 00:15:48.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.259 "is_configured": false, 00:15:48.259 "data_offset": 2048, 00:15:48.259 "data_size": 63488 00:15:48.259 } 00:15:48.259 ] 00:15:48.259 }' 00:15:48.259 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.259 09:28:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.519 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:48.519 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:48.519 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:48.519 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.519 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.519 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.519 [2024-11-20 09:28:13.940340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.519 [2024-11-20 09:28:13.940481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.519 [2024-11-20 09:28:13.940526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:48.519 [2024-11-20 09:28:13.940562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.519 [2024-11-20 09:28:13.941118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.519 [2024-11-20 09:28:13.941202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.519 [2024-11-20 09:28:13.941328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:48.519 [2024-11-20 09:28:13.941398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.520 [2024-11-20 09:28:13.941562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:48.520 [2024-11-20 09:28:13.941577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:48.520 [2024-11-20 
09:28:13.941853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:48.520 [2024-11-20 09:28:13.948034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:48.520 [2024-11-20 09:28:13.948067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:48.520 [2024-11-20 09:28:13.948477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.520 pt3 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.520 09:28:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.520 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.780 09:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.780 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.780 "name": "raid_bdev1", 00:15:48.780 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:48.780 "strip_size_kb": 64, 00:15:48.780 "state": "online", 00:15:48.780 "raid_level": "raid5f", 00:15:48.780 "superblock": true, 00:15:48.780 "num_base_bdevs": 3, 00:15:48.780 "num_base_bdevs_discovered": 2, 00:15:48.780 "num_base_bdevs_operational": 2, 00:15:48.780 "base_bdevs_list": [ 00:15:48.780 { 00:15:48.780 "name": null, 00:15:48.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.780 "is_configured": false, 00:15:48.780 "data_offset": 2048, 00:15:48.780 "data_size": 63488 00:15:48.780 }, 00:15:48.780 { 00:15:48.780 "name": "pt2", 00:15:48.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.780 "is_configured": true, 00:15:48.780 "data_offset": 2048, 00:15:48.780 "data_size": 63488 00:15:48.780 }, 00:15:48.780 { 00:15:48.780 "name": "pt3", 00:15:48.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.780 "is_configured": true, 00:15:48.780 "data_offset": 2048, 00:15:48.780 "data_size": 63488 00:15:48.780 } 00:15:48.780 ] 00:15:48.780 }' 00:15:48.780 09:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.780 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.038 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.039 [2024-11-20 09:28:14.416121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.039 [2024-11-20 09:28:14.416222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.039 [2024-11-20 09:28:14.416352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.039 [2024-11-20 09:28:14.416477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.039 [2024-11-20 09:28:14.416539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.039 09:28:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.039 [2024-11-20 09:28:14.476066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.039 [2024-11-20 09:28:14.476143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.039 [2024-11-20 09:28:14.476168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:49.039 [2024-11-20 09:28:14.476179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.039 [2024-11-20 09:28:14.478893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.039 [2024-11-20 09:28:14.478935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.039 [2024-11-20 09:28:14.479036] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:49.039 [2024-11-20 09:28:14.479093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.039 [2024-11-20 09:28:14.479244] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:49.039 [2024-11-20 09:28:14.479256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.039 [2024-11-20 09:28:14.479274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:49.039 
[2024-11-20 09:28:14.479393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.039 pt1 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.039 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.298 09:28:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.298 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.298 "name": "raid_bdev1", 00:15:49.298 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:49.298 "strip_size_kb": 64, 00:15:49.298 "state": "configuring", 00:15:49.298 "raid_level": "raid5f", 00:15:49.298 "superblock": true, 00:15:49.298 "num_base_bdevs": 3, 00:15:49.298 "num_base_bdevs_discovered": 1, 00:15:49.298 "num_base_bdevs_operational": 2, 00:15:49.298 "base_bdevs_list": [ 00:15:49.298 { 00:15:49.298 "name": null, 00:15:49.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.298 "is_configured": false, 00:15:49.298 "data_offset": 2048, 00:15:49.298 "data_size": 63488 00:15:49.298 }, 00:15:49.298 { 00:15:49.298 "name": "pt2", 00:15:49.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.298 "is_configured": true, 00:15:49.298 "data_offset": 2048, 00:15:49.298 "data_size": 63488 00:15:49.298 }, 00:15:49.298 { 00:15:49.298 "name": null, 00:15:49.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.298 "is_configured": false, 00:15:49.298 "data_offset": 2048, 00:15:49.298 "data_size": 63488 00:15:49.298 } 00:15:49.298 ] 00:15:49.298 }' 00:15:49.298 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.298 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 [2024-11-20 09:28:14.963496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.557 [2024-11-20 09:28:14.963631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.557 [2024-11-20 09:28:14.963692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:49.557 [2024-11-20 09:28:14.963735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.557 [2024-11-20 09:28:14.964382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.557 [2024-11-20 09:28:14.964475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.557 [2024-11-20 09:28:14.964629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:49.557 [2024-11-20 09:28:14.964696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.557 [2024-11-20 09:28:14.964903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:49.557 [2024-11-20 09:28:14.964952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:49.557 [2024-11-20 09:28:14.965308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:49.557 [2024-11-20 09:28:14.973218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:49.557 [2024-11-20 
09:28:14.973305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:49.557 [2024-11-20 09:28:14.973721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.557 pt3 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.557 09:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.557 09:28:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.815 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.815 "name": "raid_bdev1", 00:15:49.815 "uuid": "b0d29f6e-5fde-4d41-a8e0-d36c5528b705", 00:15:49.815 "strip_size_kb": 64, 00:15:49.815 "state": "online", 00:15:49.815 "raid_level": "raid5f", 00:15:49.815 "superblock": true, 00:15:49.815 "num_base_bdevs": 3, 00:15:49.815 "num_base_bdevs_discovered": 2, 00:15:49.815 "num_base_bdevs_operational": 2, 00:15:49.815 "base_bdevs_list": [ 00:15:49.815 { 00:15:49.815 "name": null, 00:15:49.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.815 "is_configured": false, 00:15:49.815 "data_offset": 2048, 00:15:49.815 "data_size": 63488 00:15:49.815 }, 00:15:49.815 { 00:15:49.815 "name": "pt2", 00:15:49.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.815 "is_configured": true, 00:15:49.815 "data_offset": 2048, 00:15:49.815 "data_size": 63488 00:15:49.815 }, 00:15:49.815 { 00:15:49.815 "name": "pt3", 00:15:49.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.815 "is_configured": true, 00:15:49.815 "data_offset": 2048, 00:15:49.815 "data_size": 63488 00:15:49.815 } 00:15:49.815 ] 00:15:49.815 }' 00:15:49.815 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.815 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:50.074 [2024-11-20 09:28:15.406362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b0d29f6e-5fde-4d41-a8e0-d36c5528b705 '!=' b0d29f6e-5fde-4d41-a8e0-d36c5528b705 ']' 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81533 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81533 ']' 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81533 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81533 00:15:50.074 killing process with pid 81533 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81533' 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81533 00:15:50.074 09:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81533 00:15:50.074 [2024-11-20 09:28:15.467869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.074 [2024-11-20 09:28:15.467981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.074 [2024-11-20 09:28:15.468062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.074 [2024-11-20 09:28:15.468076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:50.641 [2024-11-20 09:28:15.829835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.016 ************************************ 00:15:52.017 END TEST raid5f_superblock_test 00:15:52.017 ************************************ 00:15:52.017 09:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:52.017 00:15:52.017 real 0m7.992s 00:15:52.017 user 0m12.343s 00:15:52.017 sys 0m1.420s 00:15:52.017 09:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.017 09:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.017 09:28:17 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:52.017 09:28:17 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:52.017 09:28:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:52.017 09:28:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.017 09:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.017 ************************************ 00:15:52.017 START TEST 
raid5f_rebuild_test 00:15:52.017 ************************************ 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:52.017 09:28:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81977 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81977 00:15:52.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81977 ']' 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.017 09:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.017 [2024-11-20 09:28:17.265066] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:15:52.017 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:52.017 Zero copy mechanism will not be used. 00:15:52.017 [2024-11-20 09:28:17.265275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81977 ] 00:15:52.017 [2024-11-20 09:28:17.447577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.276 [2024-11-20 09:28:17.582973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.535 [2024-11-20 09:28:17.824271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.535 [2024-11-20 09:28:17.824328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.793 BaseBdev1_malloc 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.793 [2024-11-20 09:28:18.230299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.793 [2024-11-20 09:28:18.230456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.793 [2024-11-20 09:28:18.230521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:52.793 [2024-11-20 09:28:18.230564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.793 [2024-11-20 09:28:18.233025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.793 [2024-11-20 09:28:18.233128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.793 BaseBdev1 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:52.793 
09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.793 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 BaseBdev2_malloc 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 [2024-11-20 09:28:18.288015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:53.054 [2024-11-20 09:28:18.288149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.054 [2024-11-20 09:28:18.288180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:53.054 [2024-11-20 09:28:18.288197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.054 [2024-11-20 09:28:18.290719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.054 BaseBdev2 00:15:53.054 [2024-11-20 09:28:18.290803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 
BaseBdev3_malloc 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 [2024-11-20 09:28:18.361213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:53.054 [2024-11-20 09:28:18.361364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.054 [2024-11-20 09:28:18.361415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:53.054 [2024-11-20 09:28:18.361473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.054 [2024-11-20 09:28:18.364016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.054 [2024-11-20 09:28:18.364109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:53.054 BaseBdev3 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 spare_malloc 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:53.054 09:28:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 spare_delay 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 [2024-11-20 09:28:18.429349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:53.054 [2024-11-20 09:28:18.429558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.054 [2024-11-20 09:28:18.429621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:53.054 [2024-11-20 09:28:18.429692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.054 [2024-11-20 09:28:18.432460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.054 [2024-11-20 09:28:18.432509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:53.054 spare 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 [2024-11-20 09:28:18.445481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:15:53.054 [2024-11-20 09:28:18.447774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.054 [2024-11-20 09:28:18.447917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.054 [2024-11-20 09:28:18.448069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:53.054 [2024-11-20 09:28:18.448119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:53.054 [2024-11-20 09:28:18.448505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:53.054 [2024-11-20 09:28:18.455328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:53.054 [2024-11-20 09:28:18.455410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:53.054 [2024-11-20 09:28:18.455794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.054 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.314 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.314 "name": "raid_bdev1", 00:15:53.314 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:53.314 "strip_size_kb": 64, 00:15:53.314 "state": "online", 00:15:53.314 "raid_level": "raid5f", 00:15:53.314 "superblock": false, 00:15:53.314 "num_base_bdevs": 3, 00:15:53.314 "num_base_bdevs_discovered": 3, 00:15:53.314 "num_base_bdevs_operational": 3, 00:15:53.314 "base_bdevs_list": [ 00:15:53.314 { 00:15:53.314 "name": "BaseBdev1", 00:15:53.314 "uuid": "206cbcc7-7434-539e-accb-eded459f4c8a", 00:15:53.314 "is_configured": true, 00:15:53.314 "data_offset": 0, 00:15:53.314 "data_size": 65536 00:15:53.314 }, 00:15:53.314 { 00:15:53.314 "name": "BaseBdev2", 00:15:53.314 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:53.314 "is_configured": true, 00:15:53.314 "data_offset": 0, 00:15:53.314 "data_size": 65536 00:15:53.314 }, 00:15:53.314 { 00:15:53.314 "name": "BaseBdev3", 00:15:53.314 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:53.314 "is_configured": true, 00:15:53.314 "data_offset": 0, 00:15:53.314 "data_size": 65536 00:15:53.314 } 00:15:53.314 ] 00:15:53.314 }' 
00:15:53.314 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.314 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.573 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.573 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.573 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.573 09:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:53.573 [2024-11-20 09:28:18.962599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.573 09:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.573 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:53.573 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.573 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.573 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.573 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:53.573 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:53.832 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:53.832 [2024-11-20 09:28:19.273904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:54.091 /dev/nbd0 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # 
break 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.091 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.092 1+0 records in 00:15:54.092 1+0 records out 00:15:54.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562434 s, 7.3 MB/s 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:54.092 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:54.351 512+0 records in 00:15:54.351 512+0 records out 00:15:54.351 67108864 bytes (67 MB, 64 MiB) copied, 0.436312 s, 154 MB/s 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.351 09:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.611 [2024-11-20 09:28:20.016776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.611 [2024-11-20 09:28:20.033317] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.611 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.870 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.870 "name": "raid_bdev1", 00:15:54.870 "uuid": 
"8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:54.870 "strip_size_kb": 64, 00:15:54.870 "state": "online", 00:15:54.870 "raid_level": "raid5f", 00:15:54.870 "superblock": false, 00:15:54.870 "num_base_bdevs": 3, 00:15:54.870 "num_base_bdevs_discovered": 2, 00:15:54.870 "num_base_bdevs_operational": 2, 00:15:54.870 "base_bdevs_list": [ 00:15:54.870 { 00:15:54.870 "name": null, 00:15:54.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.870 "is_configured": false, 00:15:54.870 "data_offset": 0, 00:15:54.870 "data_size": 65536 00:15:54.870 }, 00:15:54.870 { 00:15:54.870 "name": "BaseBdev2", 00:15:54.870 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:54.870 "is_configured": true, 00:15:54.870 "data_offset": 0, 00:15:54.870 "data_size": 65536 00:15:54.870 }, 00:15:54.870 { 00:15:54.870 "name": "BaseBdev3", 00:15:54.870 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:54.870 "is_configured": true, 00:15:54.870 "data_offset": 0, 00:15:54.870 "data_size": 65536 00:15:54.870 } 00:15:54.870 ] 00:15:54.870 }' 00:15:54.870 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.870 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.152 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.152 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.152 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.152 [2024-11-20 09:28:20.488582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.152 [2024-11-20 09:28:20.508339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:55.152 09:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.152 09:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 
00:15:55.152 [2024-11-20 09:28:20.517112] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.090 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.348 "name": "raid_bdev1", 00:15:56.348 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:56.348 "strip_size_kb": 64, 00:15:56.348 "state": "online", 00:15:56.348 "raid_level": "raid5f", 00:15:56.348 "superblock": false, 00:15:56.348 "num_base_bdevs": 3, 00:15:56.348 "num_base_bdevs_discovered": 3, 00:15:56.348 "num_base_bdevs_operational": 3, 00:15:56.348 "process": { 00:15:56.348 "type": "rebuild", 00:15:56.348 "target": "spare", 00:15:56.348 "progress": { 00:15:56.348 "blocks": 20480, 00:15:56.348 "percent": 15 00:15:56.348 } 00:15:56.348 }, 00:15:56.348 "base_bdevs_list": [ 00:15:56.348 { 00:15:56.348 "name": "spare", 00:15:56.348 
"uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:15:56.348 "is_configured": true, 00:15:56.348 "data_offset": 0, 00:15:56.348 "data_size": 65536 00:15:56.348 }, 00:15:56.348 { 00:15:56.348 "name": "BaseBdev2", 00:15:56.348 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:56.348 "is_configured": true, 00:15:56.348 "data_offset": 0, 00:15:56.348 "data_size": 65536 00:15:56.348 }, 00:15:56.348 { 00:15:56.348 "name": "BaseBdev3", 00:15:56.348 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:56.348 "is_configured": true, 00:15:56.348 "data_offset": 0, 00:15:56.348 "data_size": 65536 00:15:56.348 } 00:15:56.348 ] 00:15:56.348 }' 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.348 [2024-11-20 09:28:21.633311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.348 [2024-11-20 09:28:21.729539] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:56.348 [2024-11-20 09:28:21.729717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.348 [2024-11-20 09:28:21.729780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.348 [2024-11-20 09:28:21.729823] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.348 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.606 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.606 "name": "raid_bdev1", 00:15:56.606 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 
00:15:56.606 "strip_size_kb": 64, 00:15:56.607 "state": "online", 00:15:56.607 "raid_level": "raid5f", 00:15:56.607 "superblock": false, 00:15:56.607 "num_base_bdevs": 3, 00:15:56.607 "num_base_bdevs_discovered": 2, 00:15:56.607 "num_base_bdevs_operational": 2, 00:15:56.607 "base_bdevs_list": [ 00:15:56.607 { 00:15:56.607 "name": null, 00:15:56.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.607 "is_configured": false, 00:15:56.607 "data_offset": 0, 00:15:56.607 "data_size": 65536 00:15:56.607 }, 00:15:56.607 { 00:15:56.607 "name": "BaseBdev2", 00:15:56.607 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:56.607 "is_configured": true, 00:15:56.607 "data_offset": 0, 00:15:56.607 "data_size": 65536 00:15:56.607 }, 00:15:56.607 { 00:15:56.607 "name": "BaseBdev3", 00:15:56.607 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:56.607 "is_configured": true, 00:15:56.607 "data_offset": 0, 00:15:56.607 "data_size": 65536 00:15:56.607 } 00:15:56.607 ] 00:15:56.607 }' 00:15:56.607 09:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.607 09:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.866 
09:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.866 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.866 "name": "raid_bdev1", 00:15:56.866 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:56.866 "strip_size_kb": 64, 00:15:56.866 "state": "online", 00:15:56.866 "raid_level": "raid5f", 00:15:56.866 "superblock": false, 00:15:56.866 "num_base_bdevs": 3, 00:15:56.866 "num_base_bdevs_discovered": 2, 00:15:56.866 "num_base_bdevs_operational": 2, 00:15:56.866 "base_bdevs_list": [ 00:15:56.866 { 00:15:56.866 "name": null, 00:15:56.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.866 "is_configured": false, 00:15:56.866 "data_offset": 0, 00:15:56.866 "data_size": 65536 00:15:56.866 }, 00:15:56.866 { 00:15:56.867 "name": "BaseBdev2", 00:15:56.867 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:56.867 "is_configured": true, 00:15:56.867 "data_offset": 0, 00:15:56.867 "data_size": 65536 00:15:56.867 }, 00:15:56.867 { 00:15:56.867 "name": "BaseBdev3", 00:15:56.867 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:56.867 "is_configured": true, 00:15:56.867 "data_offset": 0, 00:15:56.867 "data_size": 65536 00:15:56.867 } 00:15:56.867 ] 00:15:56.867 }' 00:15:56.867 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 
-- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.154 [2024-11-20 09:28:22.385021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.154 [2024-11-20 09:28:22.401725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.154 09:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.154 [2024-11-20 09:28:22.409713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.104 "name": "raid_bdev1", 00:15:58.104 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:58.104 "strip_size_kb": 64, 00:15:58.104 "state": "online", 00:15:58.104 "raid_level": "raid5f", 00:15:58.104 "superblock": false, 00:15:58.104 "num_base_bdevs": 3, 00:15:58.104 "num_base_bdevs_discovered": 3, 00:15:58.104 "num_base_bdevs_operational": 3, 00:15:58.104 "process": { 00:15:58.104 "type": "rebuild", 00:15:58.104 "target": "spare", 00:15:58.104 "progress": { 00:15:58.104 "blocks": 20480, 00:15:58.104 "percent": 15 00:15:58.104 } 00:15:58.104 }, 00:15:58.104 "base_bdevs_list": [ 00:15:58.104 { 00:15:58.104 "name": "spare", 00:15:58.104 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:15:58.104 "is_configured": true, 00:15:58.104 "data_offset": 0, 00:15:58.104 "data_size": 65536 00:15:58.104 }, 00:15:58.104 { 00:15:58.104 "name": "BaseBdev2", 00:15:58.104 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:58.104 "is_configured": true, 00:15:58.104 "data_offset": 0, 00:15:58.104 "data_size": 65536 00:15:58.104 }, 00:15:58.104 { 00:15:58.104 "name": "BaseBdev3", 00:15:58.104 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:58.104 "is_configured": true, 00:15:58.104 "data_offset": 0, 00:15:58.104 "data_size": 65536 00:15:58.104 } 00:15:58.104 ] 00:15:58.104 }' 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.104 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=3 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=578 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.362 "name": "raid_bdev1", 00:15:58.362 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:58.362 "strip_size_kb": 64, 00:15:58.362 "state": "online", 00:15:58.362 "raid_level": "raid5f", 00:15:58.362 "superblock": false, 00:15:58.362 "num_base_bdevs": 3, 00:15:58.362 "num_base_bdevs_discovered": 3, 00:15:58.362 "num_base_bdevs_operational": 3, 00:15:58.362 "process": { 00:15:58.362 "type": "rebuild", 00:15:58.362 "target": "spare", 
00:15:58.362 "progress": { 00:15:58.362 "blocks": 22528, 00:15:58.362 "percent": 17 00:15:58.362 } 00:15:58.362 }, 00:15:58.362 "base_bdevs_list": [ 00:15:58.362 { 00:15:58.362 "name": "spare", 00:15:58.362 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:15:58.362 "is_configured": true, 00:15:58.362 "data_offset": 0, 00:15:58.362 "data_size": 65536 00:15:58.362 }, 00:15:58.362 { 00:15:58.362 "name": "BaseBdev2", 00:15:58.362 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:58.362 "is_configured": true, 00:15:58.362 "data_offset": 0, 00:15:58.362 "data_size": 65536 00:15:58.362 }, 00:15:58.362 { 00:15:58.362 "name": "BaseBdev3", 00:15:58.362 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:58.362 "is_configured": true, 00:15:58.362 "data_offset": 0, 00:15:58.362 "data_size": 65536 00:15:58.362 } 00:15:58.362 ] 00:15:58.362 }' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.362 09:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.296 09:28:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.296 09:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.553 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.553 "name": "raid_bdev1", 00:15:59.553 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:15:59.553 "strip_size_kb": 64, 00:15:59.553 "state": "online", 00:15:59.553 "raid_level": "raid5f", 00:15:59.553 "superblock": false, 00:15:59.553 "num_base_bdevs": 3, 00:15:59.553 "num_base_bdevs_discovered": 3, 00:15:59.553 "num_base_bdevs_operational": 3, 00:15:59.553 "process": { 00:15:59.553 "type": "rebuild", 00:15:59.553 "target": "spare", 00:15:59.553 "progress": { 00:15:59.553 "blocks": 45056, 00:15:59.553 "percent": 34 00:15:59.553 } 00:15:59.553 }, 00:15:59.553 "base_bdevs_list": [ 00:15:59.553 { 00:15:59.553 "name": "spare", 00:15:59.553 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:15:59.553 "is_configured": true, 00:15:59.553 "data_offset": 0, 00:15:59.553 "data_size": 65536 00:15:59.553 }, 00:15:59.553 { 00:15:59.553 "name": "BaseBdev2", 00:15:59.553 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:15:59.553 "is_configured": true, 00:15:59.553 "data_offset": 0, 00:15:59.553 "data_size": 65536 00:15:59.553 }, 00:15:59.553 { 00:15:59.553 "name": "BaseBdev3", 00:15:59.553 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:15:59.553 "is_configured": true, 00:15:59.553 "data_offset": 0, 00:15:59.553 "data_size": 65536 00:15:59.553 } 
00:15:59.553 ] 00:15:59.553 }' 00:15:59.553 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.553 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.553 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.553 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.553 09:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.488 "name": "raid_bdev1", 00:16:00.488 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:16:00.488 
"strip_size_kb": 64, 00:16:00.488 "state": "online", 00:16:00.488 "raid_level": "raid5f", 00:16:00.488 "superblock": false, 00:16:00.488 "num_base_bdevs": 3, 00:16:00.488 "num_base_bdevs_discovered": 3, 00:16:00.488 "num_base_bdevs_operational": 3, 00:16:00.488 "process": { 00:16:00.488 "type": "rebuild", 00:16:00.488 "target": "spare", 00:16:00.488 "progress": { 00:16:00.488 "blocks": 69632, 00:16:00.488 "percent": 53 00:16:00.488 } 00:16:00.488 }, 00:16:00.488 "base_bdevs_list": [ 00:16:00.488 { 00:16:00.488 "name": "spare", 00:16:00.488 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:16:00.488 "is_configured": true, 00:16:00.488 "data_offset": 0, 00:16:00.488 "data_size": 65536 00:16:00.488 }, 00:16:00.488 { 00:16:00.488 "name": "BaseBdev2", 00:16:00.488 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:16:00.488 "is_configured": true, 00:16:00.488 "data_offset": 0, 00:16:00.488 "data_size": 65536 00:16:00.488 }, 00:16:00.488 { 00:16:00.488 "name": "BaseBdev3", 00:16:00.488 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:16:00.488 "is_configured": true, 00:16:00.488 "data_offset": 0, 00:16:00.488 "data_size": 65536 00:16:00.488 } 00:16:00.488 ] 00:16:00.488 }' 00:16:00.488 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.747 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.747 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.747 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.747 09:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.686 09:28:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.686 09:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.686 09:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.686 09:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.686 "name": "raid_bdev1", 00:16:01.686 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:16:01.686 "strip_size_kb": 64, 00:16:01.686 "state": "online", 00:16:01.686 "raid_level": "raid5f", 00:16:01.686 "superblock": false, 00:16:01.687 "num_base_bdevs": 3, 00:16:01.687 "num_base_bdevs_discovered": 3, 00:16:01.687 "num_base_bdevs_operational": 3, 00:16:01.687 "process": { 00:16:01.687 "type": "rebuild", 00:16:01.687 "target": "spare", 00:16:01.687 "progress": { 00:16:01.687 "blocks": 92160, 00:16:01.687 "percent": 70 00:16:01.687 } 00:16:01.687 }, 00:16:01.687 "base_bdevs_list": [ 00:16:01.687 { 00:16:01.687 "name": "spare", 00:16:01.687 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:16:01.687 "is_configured": true, 00:16:01.687 "data_offset": 0, 00:16:01.687 "data_size": 65536 00:16:01.687 }, 00:16:01.687 { 00:16:01.687 "name": "BaseBdev2", 00:16:01.687 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:16:01.687 
"is_configured": true, 00:16:01.687 "data_offset": 0, 00:16:01.687 "data_size": 65536 00:16:01.687 }, 00:16:01.687 { 00:16:01.687 "name": "BaseBdev3", 00:16:01.687 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:16:01.687 "is_configured": true, 00:16:01.687 "data_offset": 0, 00:16:01.687 "data_size": 65536 00:16:01.687 } 00:16:01.687 ] 00:16:01.687 }' 00:16:01.687 09:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.687 09:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.687 09:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.946 09:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.946 09:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.884 "name": "raid_bdev1", 00:16:02.884 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:16:02.884 "strip_size_kb": 64, 00:16:02.884 "state": "online", 00:16:02.884 "raid_level": "raid5f", 00:16:02.884 "superblock": false, 00:16:02.884 "num_base_bdevs": 3, 00:16:02.884 "num_base_bdevs_discovered": 3, 00:16:02.884 "num_base_bdevs_operational": 3, 00:16:02.884 "process": { 00:16:02.884 "type": "rebuild", 00:16:02.884 "target": "spare", 00:16:02.884 "progress": { 00:16:02.884 "blocks": 114688, 00:16:02.884 "percent": 87 00:16:02.884 } 00:16:02.884 }, 00:16:02.884 "base_bdevs_list": [ 00:16:02.884 { 00:16:02.884 "name": "spare", 00:16:02.884 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:16:02.884 "is_configured": true, 00:16:02.884 "data_offset": 0, 00:16:02.884 "data_size": 65536 00:16:02.884 }, 00:16:02.884 { 00:16:02.884 "name": "BaseBdev2", 00:16:02.884 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:16:02.884 "is_configured": true, 00:16:02.884 "data_offset": 0, 00:16:02.884 "data_size": 65536 00:16:02.884 }, 00:16:02.884 { 00:16:02.884 "name": "BaseBdev3", 00:16:02.884 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:16:02.884 "is_configured": true, 00:16:02.884 "data_offset": 0, 00:16:02.884 "data_size": 65536 00:16:02.884 } 00:16:02.884 ] 00:16:02.884 }' 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.884 09:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.884 09:28:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.454 [2024-11-20 09:28:28.871342] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.454 [2024-11-20 09:28:28.871589] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.454 [2024-11-20 09:28:28.871688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.085 "name": "raid_bdev1", 00:16:04.085 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:16:04.085 "strip_size_kb": 64, 00:16:04.085 "state": "online", 00:16:04.085 "raid_level": "raid5f", 00:16:04.085 "superblock": false, 
00:16:04.085 "num_base_bdevs": 3, 00:16:04.085 "num_base_bdevs_discovered": 3, 00:16:04.085 "num_base_bdevs_operational": 3, 00:16:04.085 "base_bdevs_list": [ 00:16:04.085 { 00:16:04.085 "name": "spare", 00:16:04.085 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:16:04.085 "is_configured": true, 00:16:04.085 "data_offset": 0, 00:16:04.085 "data_size": 65536 00:16:04.085 }, 00:16:04.085 { 00:16:04.085 "name": "BaseBdev2", 00:16:04.085 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:16:04.085 "is_configured": true, 00:16:04.085 "data_offset": 0, 00:16:04.085 "data_size": 65536 00:16:04.085 }, 00:16:04.085 { 00:16:04.085 "name": "BaseBdev3", 00:16:04.085 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:16:04.085 "is_configured": true, 00:16:04.085 "data_offset": 0, 00:16:04.085 "data_size": 65536 00:16:04.085 } 00:16:04.085 ] 00:16:04.085 }' 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.085 "name": "raid_bdev1", 00:16:04.085 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:16:04.085 "strip_size_kb": 64, 00:16:04.085 "state": "online", 00:16:04.085 "raid_level": "raid5f", 00:16:04.085 "superblock": false, 00:16:04.085 "num_base_bdevs": 3, 00:16:04.085 "num_base_bdevs_discovered": 3, 00:16:04.085 "num_base_bdevs_operational": 3, 00:16:04.085 "base_bdevs_list": [ 00:16:04.085 { 00:16:04.085 "name": "spare", 00:16:04.085 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:16:04.085 "is_configured": true, 00:16:04.085 "data_offset": 0, 00:16:04.085 "data_size": 65536 00:16:04.085 }, 00:16:04.085 { 00:16:04.085 "name": "BaseBdev2", 00:16:04.085 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:16:04.085 "is_configured": true, 00:16:04.085 "data_offset": 0, 00:16:04.085 "data_size": 65536 00:16:04.085 }, 00:16:04.085 { 00:16:04.085 "name": "BaseBdev3", 00:16:04.085 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:16:04.085 "is_configured": true, 00:16:04.085 "data_offset": 0, 00:16:04.085 "data_size": 65536 00:16:04.085 } 00:16:04.085 ] 00:16:04.085 }' 00:16:04.085 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.345 "name": "raid_bdev1", 00:16:04.345 "uuid": "8e0492f8-cc5a-459f-ad51-e8e2ec835fe0", 00:16:04.345 "strip_size_kb": 
64, 00:16:04.345 "state": "online", 00:16:04.345 "raid_level": "raid5f", 00:16:04.345 "superblock": false, 00:16:04.345 "num_base_bdevs": 3, 00:16:04.345 "num_base_bdevs_discovered": 3, 00:16:04.345 "num_base_bdevs_operational": 3, 00:16:04.345 "base_bdevs_list": [ 00:16:04.345 { 00:16:04.345 "name": "spare", 00:16:04.345 "uuid": "bf400bc6-4c73-53a0-adaf-79d379b376f9", 00:16:04.345 "is_configured": true, 00:16:04.345 "data_offset": 0, 00:16:04.345 "data_size": 65536 00:16:04.345 }, 00:16:04.345 { 00:16:04.345 "name": "BaseBdev2", 00:16:04.345 "uuid": "012f246d-5c3d-5219-8777-8a5a7afea297", 00:16:04.345 "is_configured": true, 00:16:04.345 "data_offset": 0, 00:16:04.345 "data_size": 65536 00:16:04.345 }, 00:16:04.345 { 00:16:04.345 "name": "BaseBdev3", 00:16:04.345 "uuid": "061e7b25-d53b-5c5d-aa61-9612703ca041", 00:16:04.345 "is_configured": true, 00:16:04.345 "data_offset": 0, 00:16:04.345 "data_size": 65536 00:16:04.345 } 00:16:04.345 ] 00:16:04.345 }' 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.345 09:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 [2024-11-20 09:28:30.087620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.691 [2024-11-20 09:28:30.087663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.691 [2024-11-20 09:28:30.087777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.691 [2024-11-20 09:28:30.087879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:04.691 [2024-11-20 09:28:30.087899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:04.950 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.951 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:04.951 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.951 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:04.951 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.951 09:28:30 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:04.951 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:04.951 /dev/nbd0 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.209 1+0 records in 00:16:05.209 1+0 records out 00:16:05.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507167 s, 8.1 MB/s 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.209 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:05.468 /dev/nbd1 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.469 1+0 records in 00:16:05.469 1+0 records out 00:16:05.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477022 s, 8.6 MB/s 
00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.469 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.727 09:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.985 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81977 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81977 ']' 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- 
# kill -0 81977 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81977 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.244 killing process with pid 81977 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81977' 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81977 00:16:06.244 Received shutdown signal, test time was about 60.000000 seconds 00:16:06.244 00:16:06.244 Latency(us) 00:16:06.244 [2024-11-20T09:28:31.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.244 [2024-11-20T09:28:31.700Z] =================================================================================================================== 00:16:06.244 [2024-11-20T09:28:31.700Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.244 [2024-11-20 09:28:31.515874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.244 09:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81977 00:16:06.811 [2024-11-20 09:28:31.963388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.756 09:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:07.756 00:16:07.756 real 0m16.049s 00:16:07.756 user 0m19.764s 00:16:07.756 sys 0m2.235s 00:16:07.756 09:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.756 09:28:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.756 ************************************ 00:16:07.756 END TEST raid5f_rebuild_test 00:16:07.756 ************************************ 00:16:08.014 09:28:33 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:08.014 09:28:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:08.014 09:28:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.014 09:28:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.014 ************************************ 00:16:08.014 START TEST raid5f_rebuild_test_sb 00:16:08.014 ************************************ 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 
-- # raid_pid=82431 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82431 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82431 ']' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.014 09:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.014 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:08.014 Zero copy mechanism will not be used. 00:16:08.014 [2024-11-20 09:28:33.375660] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:16:08.014 [2024-11-20 09:28:33.375791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82431 ] 00:16:08.273 [2024-11-20 09:28:33.550567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.273 [2024-11-20 09:28:33.674986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.531 [2024-11-20 09:28:33.896634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.531 [2024-11-20 09:28:33.896718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.099 BaseBdev1_malloc 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.099 [2024-11-20 09:28:34.359881] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.099 [2024-11-20 09:28:34.359959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.099 [2024-11-20 09:28:34.360004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.099 [2024-11-20 09:28:34.360019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.099 [2024-11-20 09:28:34.362411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.099 [2024-11-20 09:28:34.362484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.099 BaseBdev1 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.099 BaseBdev2_malloc 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.099 [2024-11-20 09:28:34.420181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:09.099 [2024-11-20 09:28:34.420255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:09.099 [2024-11-20 09:28:34.420277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.099 [2024-11-20 09:28:34.420289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.099 [2024-11-20 09:28:34.422597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.099 [2024-11-20 09:28:34.422637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.099 BaseBdev2 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.099 BaseBdev3_malloc 00:16:09.099 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.100 [2024-11-20 09:28:34.497099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:09.100 [2024-11-20 09:28:34.497172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.100 [2024-11-20 09:28:34.497198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:09.100 [2024-11-20 
09:28:34.497211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.100 [2024-11-20 09:28:34.499632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.100 [2024-11-20 09:28:34.499681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.100 BaseBdev3 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.100 spare_malloc 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.100 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.401 spare_delay 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.401 [2024-11-20 09:28:34.570069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.401 [2024-11-20 09:28:34.570143] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.401 [2024-11-20 09:28:34.570167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:09.401 [2024-11-20 09:28:34.570179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.401 [2024-11-20 09:28:34.572615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.401 [2024-11-20 09:28:34.572663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.401 spare 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.401 [2024-11-20 09:28:34.582160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.401 [2024-11-20 09:28:34.584157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.401 [2024-11-20 09:28:34.584235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.401 [2024-11-20 09:28:34.584456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:09.401 [2024-11-20 09:28:34.584481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:09.401 [2024-11-20 09:28:34.584791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:09.401 [2024-11-20 09:28:34.591204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:09.401 [2024-11-20 09:28:34.591234] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:09.401 [2024-11-20 09:28:34.591487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.401 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.401 "name": "raid_bdev1", 00:16:09.401 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:09.401 "strip_size_kb": 64, 00:16:09.402 "state": "online", 00:16:09.402 "raid_level": "raid5f", 00:16:09.402 "superblock": true, 00:16:09.402 "num_base_bdevs": 3, 00:16:09.402 "num_base_bdevs_discovered": 3, 00:16:09.402 "num_base_bdevs_operational": 3, 00:16:09.402 "base_bdevs_list": [ 00:16:09.402 { 00:16:09.402 "name": "BaseBdev1", 00:16:09.402 "uuid": "a5351c69-a40c-53bf-9324-818f5435cad3", 00:16:09.402 "is_configured": true, 00:16:09.402 "data_offset": 2048, 00:16:09.402 "data_size": 63488 00:16:09.402 }, 00:16:09.402 { 00:16:09.402 "name": "BaseBdev2", 00:16:09.402 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:09.402 "is_configured": true, 00:16:09.402 "data_offset": 2048, 00:16:09.402 "data_size": 63488 00:16:09.402 }, 00:16:09.402 { 00:16:09.402 "name": "BaseBdev3", 00:16:09.402 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:09.402 "is_configured": true, 00:16:09.402 "data_offset": 2048, 00:16:09.402 "data_size": 63488 00:16:09.402 } 00:16:09.402 ] 00:16:09.402 }' 00:16:09.402 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.402 09:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.663 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:09.663 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.663 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.663 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.663 [2024-11-20 09:28:35.054260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:09.664 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.924 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:09.924 [2024-11-20 09:28:35.353580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:10.184 /dev/nbd0 00:16:10.184 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:10.184 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.185 1+0 records in 00:16:10.185 1+0 records out 00:16:10.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438018 s, 9.4 MB/s 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:10.185 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:10.444 496+0 records in 00:16:10.444 496+0 records out 00:16:10.444 65011712 bytes (65 MB, 62 MiB) copied, 0.420413 s, 155 MB/s 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:10.444 09:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.702 [2024-11-20 09:28:36.082369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.702 [2024-11-20 09:28:36.119537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.702 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.961 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.961 "name": "raid_bdev1", 00:16:10.961 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:10.961 "strip_size_kb": 64, 00:16:10.961 "state": "online", 00:16:10.961 "raid_level": "raid5f", 00:16:10.961 "superblock": true, 00:16:10.961 "num_base_bdevs": 3, 00:16:10.961 "num_base_bdevs_discovered": 2, 00:16:10.961 "num_base_bdevs_operational": 2, 00:16:10.961 "base_bdevs_list": [ 00:16:10.961 { 00:16:10.961 "name": null, 00:16:10.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.961 "is_configured": 
false, 00:16:10.961 "data_offset": 0, 00:16:10.961 "data_size": 63488 00:16:10.961 }, 00:16:10.961 { 00:16:10.961 "name": "BaseBdev2", 00:16:10.961 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:10.961 "is_configured": true, 00:16:10.961 "data_offset": 2048, 00:16:10.961 "data_size": 63488 00:16:10.961 }, 00:16:10.961 { 00:16:10.961 "name": "BaseBdev3", 00:16:10.961 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:10.961 "is_configured": true, 00:16:10.961 "data_offset": 2048, 00:16:10.961 "data_size": 63488 00:16:10.961 } 00:16:10.961 ] 00:16:10.961 }' 00:16:10.961 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.961 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.220 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.220 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.220 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.220 [2024-11-20 09:28:36.570906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.220 [2024-11-20 09:28:36.591528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:11.220 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.220 09:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:11.220 [2024-11-20 09:28:36.601463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.158 09:28:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.158 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.418 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.418 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.418 "name": "raid_bdev1", 00:16:12.418 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:12.418 "strip_size_kb": 64, 00:16:12.418 "state": "online", 00:16:12.418 "raid_level": "raid5f", 00:16:12.418 "superblock": true, 00:16:12.418 "num_base_bdevs": 3, 00:16:12.418 "num_base_bdevs_discovered": 3, 00:16:12.418 "num_base_bdevs_operational": 3, 00:16:12.418 "process": { 00:16:12.418 "type": "rebuild", 00:16:12.418 "target": "spare", 00:16:12.418 "progress": { 00:16:12.418 "blocks": 18432, 00:16:12.418 "percent": 14 00:16:12.418 } 00:16:12.418 }, 00:16:12.418 "base_bdevs_list": [ 00:16:12.418 { 00:16:12.418 "name": "spare", 00:16:12.419 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 00:16:12.419 }, 00:16:12.419 { 00:16:12.419 "name": "BaseBdev2", 00:16:12.419 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 
00:16:12.419 }, 00:16:12.419 { 00:16:12.419 "name": "BaseBdev3", 00:16:12.419 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 00:16:12.419 } 00:16:12.419 ] 00:16:12.419 }' 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.419 [2024-11-20 09:28:37.757583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.419 [2024-11-20 09:28:37.813759] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.419 [2024-11-20 09:28:37.813849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.419 [2024-11-20 09:28:37.813873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.419 [2024-11-20 09:28:37.813883] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.419 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.678 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.678 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.678 "name": "raid_bdev1", 00:16:12.678 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:12.678 "strip_size_kb": 64, 00:16:12.678 "state": "online", 00:16:12.678 "raid_level": "raid5f", 00:16:12.678 "superblock": true, 00:16:12.678 "num_base_bdevs": 3, 00:16:12.678 "num_base_bdevs_discovered": 2, 00:16:12.678 "num_base_bdevs_operational": 2, 00:16:12.678 "base_bdevs_list": [ 00:16:12.678 
{ 00:16:12.678 "name": null, 00:16:12.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.678 "is_configured": false, 00:16:12.678 "data_offset": 0, 00:16:12.678 "data_size": 63488 00:16:12.678 }, 00:16:12.678 { 00:16:12.678 "name": "BaseBdev2", 00:16:12.678 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:12.679 "is_configured": true, 00:16:12.679 "data_offset": 2048, 00:16:12.679 "data_size": 63488 00:16:12.679 }, 00:16:12.679 { 00:16:12.679 "name": "BaseBdev3", 00:16:12.679 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:12.679 "is_configured": true, 00:16:12.679 "data_offset": 2048, 00:16:12.679 "data_size": 63488 00:16:12.679 } 00:16:12.679 ] 00:16:12.679 }' 00:16:12.679 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.679 09:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:12.937 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.937 "name": "raid_bdev1", 00:16:12.937 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:12.938 "strip_size_kb": 64, 00:16:12.938 "state": "online", 00:16:12.938 "raid_level": "raid5f", 00:16:12.938 "superblock": true, 00:16:12.938 "num_base_bdevs": 3, 00:16:12.938 "num_base_bdevs_discovered": 2, 00:16:12.938 "num_base_bdevs_operational": 2, 00:16:12.938 "base_bdevs_list": [ 00:16:12.938 { 00:16:12.938 "name": null, 00:16:12.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.938 "is_configured": false, 00:16:12.938 "data_offset": 0, 00:16:12.938 "data_size": 63488 00:16:12.938 }, 00:16:12.938 { 00:16:12.938 "name": "BaseBdev2", 00:16:12.938 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:12.938 "is_configured": true, 00:16:12.938 "data_offset": 2048, 00:16:12.938 "data_size": 63488 00:16:12.938 }, 00:16:12.938 { 00:16:12.938 "name": "BaseBdev3", 00:16:12.938 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:12.938 "is_configured": true, 00:16:12.938 "data_offset": 2048, 00:16:12.938 "data_size": 63488 00:16:12.938 } 00:16:12.938 ] 00:16:12.938 }' 00:16:12.938 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.938 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:13.196 [2024-11-20 09:28:38.446585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.196 [2024-11-20 09:28:38.466038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.196 09:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:13.197 [2024-11-20 09:28:38.475603] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.186 "name": "raid_bdev1", 00:16:14.186 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:14.186 "strip_size_kb": 64, 00:16:14.186 "state": "online", 
00:16:14.186 "raid_level": "raid5f", 00:16:14.186 "superblock": true, 00:16:14.186 "num_base_bdevs": 3, 00:16:14.186 "num_base_bdevs_discovered": 3, 00:16:14.186 "num_base_bdevs_operational": 3, 00:16:14.186 "process": { 00:16:14.186 "type": "rebuild", 00:16:14.186 "target": "spare", 00:16:14.186 "progress": { 00:16:14.186 "blocks": 18432, 00:16:14.186 "percent": 14 00:16:14.186 } 00:16:14.186 }, 00:16:14.186 "base_bdevs_list": [ 00:16:14.186 { 00:16:14.186 "name": "spare", 00:16:14.186 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:14.186 "is_configured": true, 00:16:14.186 "data_offset": 2048, 00:16:14.186 "data_size": 63488 00:16:14.186 }, 00:16:14.186 { 00:16:14.186 "name": "BaseBdev2", 00:16:14.186 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:14.186 "is_configured": true, 00:16:14.186 "data_offset": 2048, 00:16:14.186 "data_size": 63488 00:16:14.186 }, 00:16:14.186 { 00:16:14.186 "name": "BaseBdev3", 00:16:14.186 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:14.186 "is_configured": true, 00:16:14.186 "data_offset": 2048, 00:16:14.186 "data_size": 63488 00:16:14.186 } 00:16:14.186 ] 00:16:14.186 }' 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.186 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:14.444 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.444 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.445 "name": "raid_bdev1", 00:16:14.445 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:14.445 "strip_size_kb": 64, 00:16:14.445 "state": "online", 00:16:14.445 "raid_level": "raid5f", 00:16:14.445 "superblock": true, 00:16:14.445 "num_base_bdevs": 3, 00:16:14.445 "num_base_bdevs_discovered": 3, 00:16:14.445 "num_base_bdevs_operational": 3, 00:16:14.445 "process": { 00:16:14.445 "type": 
"rebuild", 00:16:14.445 "target": "spare", 00:16:14.445 "progress": { 00:16:14.445 "blocks": 22528, 00:16:14.445 "percent": 17 00:16:14.445 } 00:16:14.445 }, 00:16:14.445 "base_bdevs_list": [ 00:16:14.445 { 00:16:14.445 "name": "spare", 00:16:14.445 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:14.445 "is_configured": true, 00:16:14.445 "data_offset": 2048, 00:16:14.445 "data_size": 63488 00:16:14.445 }, 00:16:14.445 { 00:16:14.445 "name": "BaseBdev2", 00:16:14.445 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:14.445 "is_configured": true, 00:16:14.445 "data_offset": 2048, 00:16:14.445 "data_size": 63488 00:16:14.445 }, 00:16:14.445 { 00:16:14.445 "name": "BaseBdev3", 00:16:14.445 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:14.445 "is_configured": true, 00:16:14.445 "data_offset": 2048, 00:16:14.445 "data_size": 63488 00:16:14.445 } 00:16:14.445 ] 00:16:14.445 }' 00:16:14.445 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.445 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.445 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.445 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.445 09:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.381 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.381 "name": "raid_bdev1", 00:16:15.381 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:15.381 "strip_size_kb": 64, 00:16:15.381 "state": "online", 00:16:15.381 "raid_level": "raid5f", 00:16:15.381 "superblock": true, 00:16:15.381 "num_base_bdevs": 3, 00:16:15.381 "num_base_bdevs_discovered": 3, 00:16:15.381 "num_base_bdevs_operational": 3, 00:16:15.381 "process": { 00:16:15.381 "type": "rebuild", 00:16:15.381 "target": "spare", 00:16:15.381 "progress": { 00:16:15.381 "blocks": 45056, 00:16:15.381 "percent": 35 00:16:15.381 } 00:16:15.381 }, 00:16:15.381 "base_bdevs_list": [ 00:16:15.381 { 00:16:15.381 "name": "spare", 00:16:15.381 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:15.381 "is_configured": true, 00:16:15.381 "data_offset": 2048, 00:16:15.381 "data_size": 63488 00:16:15.381 }, 00:16:15.381 { 00:16:15.381 "name": "BaseBdev2", 00:16:15.381 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:15.381 "is_configured": true, 00:16:15.381 "data_offset": 2048, 00:16:15.381 "data_size": 63488 00:16:15.381 }, 00:16:15.381 { 00:16:15.381 "name": "BaseBdev3", 00:16:15.381 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:15.381 
"is_configured": true, 00:16:15.381 "data_offset": 2048, 00:16:15.381 "data_size": 63488 00:16:15.381 } 00:16:15.381 ] 00:16:15.381 }' 00:16:15.640 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.640 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.640 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.640 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.640 09:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.577 "name": "raid_bdev1", 00:16:16.577 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:16.577 "strip_size_kb": 64, 00:16:16.577 "state": "online", 00:16:16.577 "raid_level": "raid5f", 00:16:16.577 "superblock": true, 00:16:16.577 "num_base_bdevs": 3, 00:16:16.577 "num_base_bdevs_discovered": 3, 00:16:16.577 "num_base_bdevs_operational": 3, 00:16:16.577 "process": { 00:16:16.577 "type": "rebuild", 00:16:16.577 "target": "spare", 00:16:16.577 "progress": { 00:16:16.577 "blocks": 69632, 00:16:16.577 "percent": 54 00:16:16.577 } 00:16:16.577 }, 00:16:16.577 "base_bdevs_list": [ 00:16:16.577 { 00:16:16.577 "name": "spare", 00:16:16.577 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:16.577 "is_configured": true, 00:16:16.577 "data_offset": 2048, 00:16:16.577 "data_size": 63488 00:16:16.577 }, 00:16:16.577 { 00:16:16.577 "name": "BaseBdev2", 00:16:16.577 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:16.577 "is_configured": true, 00:16:16.577 "data_offset": 2048, 00:16:16.577 "data_size": 63488 00:16:16.577 }, 00:16:16.577 { 00:16:16.577 "name": "BaseBdev3", 00:16:16.577 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:16.577 "is_configured": true, 00:16:16.577 "data_offset": 2048, 00:16:16.577 "data_size": 63488 00:16:16.577 } 00:16:16.577 ] 00:16:16.577 }' 00:16:16.577 09:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.577 09:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.577 09:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.836 09:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.836 09:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.771 "name": "raid_bdev1", 00:16:17.771 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:17.771 "strip_size_kb": 64, 00:16:17.771 "state": "online", 00:16:17.771 "raid_level": "raid5f", 00:16:17.771 "superblock": true, 00:16:17.771 "num_base_bdevs": 3, 00:16:17.771 "num_base_bdevs_discovered": 3, 00:16:17.771 "num_base_bdevs_operational": 3, 00:16:17.771 "process": { 00:16:17.771 "type": "rebuild", 00:16:17.771 "target": "spare", 00:16:17.771 "progress": { 00:16:17.771 "blocks": 92160, 00:16:17.771 "percent": 72 00:16:17.771 } 00:16:17.771 }, 00:16:17.771 "base_bdevs_list": [ 00:16:17.771 { 00:16:17.771 "name": "spare", 00:16:17.771 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:17.771 "is_configured": true, 
00:16:17.771 "data_offset": 2048, 00:16:17.771 "data_size": 63488 00:16:17.771 }, 00:16:17.771 { 00:16:17.771 "name": "BaseBdev2", 00:16:17.771 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:17.771 "is_configured": true, 00:16:17.771 "data_offset": 2048, 00:16:17.771 "data_size": 63488 00:16:17.771 }, 00:16:17.771 { 00:16:17.771 "name": "BaseBdev3", 00:16:17.771 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:17.771 "is_configured": true, 00:16:17.771 "data_offset": 2048, 00:16:17.771 "data_size": 63488 00:16:17.771 } 00:16:17.771 ] 00:16:17.771 }' 00:16:17.771 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.772 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.772 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.030 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.030 09:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.964 "name": "raid_bdev1", 00:16:18.964 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:18.964 "strip_size_kb": 64, 00:16:18.964 "state": "online", 00:16:18.964 "raid_level": "raid5f", 00:16:18.964 "superblock": true, 00:16:18.964 "num_base_bdevs": 3, 00:16:18.964 "num_base_bdevs_discovered": 3, 00:16:18.964 "num_base_bdevs_operational": 3, 00:16:18.964 "process": { 00:16:18.964 "type": "rebuild", 00:16:18.964 "target": "spare", 00:16:18.964 "progress": { 00:16:18.964 "blocks": 116736, 00:16:18.964 "percent": 91 00:16:18.964 } 00:16:18.964 }, 00:16:18.964 "base_bdevs_list": [ 00:16:18.964 { 00:16:18.964 "name": "spare", 00:16:18.964 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:18.964 "is_configured": true, 00:16:18.964 "data_offset": 2048, 00:16:18.964 "data_size": 63488 00:16:18.964 }, 00:16:18.964 { 00:16:18.964 "name": "BaseBdev2", 00:16:18.964 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:18.964 "is_configured": true, 00:16:18.964 "data_offset": 2048, 00:16:18.964 "data_size": 63488 00:16:18.964 }, 00:16:18.964 { 00:16:18.964 "name": "BaseBdev3", 00:16:18.964 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:18.964 "is_configured": true, 00:16:18.964 "data_offset": 2048, 00:16:18.964 "data_size": 63488 00:16:18.964 } 00:16:18.964 ] 00:16:18.964 }' 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.964 09:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.531 [2024-11-20 09:28:44.736812] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:19.531 [2024-11-20 09:28:44.736929] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:19.531 [2024-11-20 09:28:44.737102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.098 09:28:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.098 "name": "raid_bdev1", 00:16:20.098 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:20.098 "strip_size_kb": 64, 00:16:20.098 "state": "online", 00:16:20.098 "raid_level": "raid5f", 00:16:20.098 "superblock": true, 00:16:20.098 "num_base_bdevs": 3, 00:16:20.098 "num_base_bdevs_discovered": 3, 00:16:20.098 "num_base_bdevs_operational": 3, 00:16:20.098 "base_bdevs_list": [ 00:16:20.098 { 00:16:20.098 "name": "spare", 00:16:20.098 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:20.098 "is_configured": true, 00:16:20.098 "data_offset": 2048, 00:16:20.098 "data_size": 63488 00:16:20.098 }, 00:16:20.098 { 00:16:20.098 "name": "BaseBdev2", 00:16:20.098 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:20.098 "is_configured": true, 00:16:20.098 "data_offset": 2048, 00:16:20.098 "data_size": 63488 00:16:20.098 }, 00:16:20.098 { 00:16:20.098 "name": "BaseBdev3", 00:16:20.098 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:20.098 "is_configured": true, 00:16:20.098 "data_offset": 2048, 00:16:20.098 "data_size": 63488 00:16:20.098 } 00:16:20.098 ] 00:16:20.098 }' 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.098 
09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.098 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.357 "name": "raid_bdev1", 00:16:20.357 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:20.357 "strip_size_kb": 64, 00:16:20.357 "state": "online", 00:16:20.357 "raid_level": "raid5f", 00:16:20.357 "superblock": true, 00:16:20.357 "num_base_bdevs": 3, 00:16:20.357 "num_base_bdevs_discovered": 3, 00:16:20.357 "num_base_bdevs_operational": 3, 00:16:20.357 "base_bdevs_list": [ 00:16:20.357 { 00:16:20.357 "name": "spare", 00:16:20.357 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:20.357 "is_configured": true, 00:16:20.357 "data_offset": 2048, 00:16:20.357 "data_size": 63488 00:16:20.357 }, 00:16:20.357 { 00:16:20.357 "name": "BaseBdev2", 00:16:20.357 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:20.357 "is_configured": true, 00:16:20.357 "data_offset": 2048, 00:16:20.357 "data_size": 63488 00:16:20.357 }, 00:16:20.357 { 00:16:20.357 "name": "BaseBdev3", 00:16:20.357 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:20.357 "is_configured": true, 00:16:20.357 "data_offset": 2048, 
00:16:20.357 "data_size": 63488 00:16:20.357 } 00:16:20.357 ] 00:16:20.357 }' 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.357 
09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.357 "name": "raid_bdev1", 00:16:20.357 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:20.357 "strip_size_kb": 64, 00:16:20.357 "state": "online", 00:16:20.357 "raid_level": "raid5f", 00:16:20.357 "superblock": true, 00:16:20.357 "num_base_bdevs": 3, 00:16:20.357 "num_base_bdevs_discovered": 3, 00:16:20.357 "num_base_bdevs_operational": 3, 00:16:20.357 "base_bdevs_list": [ 00:16:20.357 { 00:16:20.357 "name": "spare", 00:16:20.357 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:20.357 "is_configured": true, 00:16:20.357 "data_offset": 2048, 00:16:20.357 "data_size": 63488 00:16:20.357 }, 00:16:20.357 { 00:16:20.357 "name": "BaseBdev2", 00:16:20.357 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:20.357 "is_configured": true, 00:16:20.357 "data_offset": 2048, 00:16:20.357 "data_size": 63488 00:16:20.357 }, 00:16:20.357 { 00:16:20.357 "name": "BaseBdev3", 00:16:20.357 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:20.357 "is_configured": true, 00:16:20.357 "data_offset": 2048, 00:16:20.357 "data_size": 63488 00:16:20.357 } 00:16:20.357 ] 00:16:20.357 }' 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.357 09:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.924 [2024-11-20 09:28:46.127754] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.924 [2024-11-20 09:28:46.127795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.924 [2024-11-20 09:28:46.127908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.924 [2024-11-20 09:28:46.128017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.924 [2024-11-20 09:28:46.128042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:20.924 09:28:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:20.924 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:21.183 /dev/nbd0 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.183 1+0 records in 00:16:21.183 1+0 records out 00:16:21.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032508 s, 12.6 MB/s 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.183 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:21.442 /dev/nbd1 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:21.442 
09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.442 1+0 records in 00:16:21.442 1+0 records out 00:16:21.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327267 s, 12.5 MB/s 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.442 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.701 09:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.960 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.219 [2024-11-20 09:28:47.511531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.219 [2024-11-20 09:28:47.511602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.219 [2024-11-20 09:28:47.511625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:22.219 [2024-11-20 09:28:47.511638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.219 [2024-11-20 09:28:47.514310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.219 [2024-11-20 09:28:47.514360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.219 [2024-11-20 09:28:47.514477] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:22.219 [2024-11-20 09:28:47.514558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.219 [2024-11-20 09:28:47.514767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.219 [2024-11-20 09:28:47.514900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.219 spare 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.219 [2024-11-20 09:28:47.614832] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:22.219 [2024-11-20 09:28:47.614890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:22.219 [2024-11-20 09:28:47.615301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:22.219 [2024-11-20 09:28:47.621971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:22.219 [2024-11-20 09:28:47.621999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:22.219 [2024-11-20 09:28:47.622239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.219 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.478 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.478 "name": "raid_bdev1", 00:16:22.478 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:22.478 "strip_size_kb": 64, 00:16:22.478 "state": "online", 00:16:22.478 "raid_level": "raid5f", 00:16:22.478 "superblock": true, 00:16:22.478 "num_base_bdevs": 3, 00:16:22.478 "num_base_bdevs_discovered": 3, 00:16:22.478 "num_base_bdevs_operational": 3, 00:16:22.478 "base_bdevs_list": [ 00:16:22.478 { 
00:16:22.478 "name": "spare", 00:16:22.478 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:22.478 "is_configured": true, 00:16:22.478 "data_offset": 2048, 00:16:22.478 "data_size": 63488 00:16:22.478 }, 00:16:22.478 { 00:16:22.478 "name": "BaseBdev2", 00:16:22.478 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:22.478 "is_configured": true, 00:16:22.478 "data_offset": 2048, 00:16:22.478 "data_size": 63488 00:16:22.478 }, 00:16:22.478 { 00:16:22.478 "name": "BaseBdev3", 00:16:22.478 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:22.478 "is_configured": true, 00:16:22.478 "data_offset": 2048, 00:16:22.478 "data_size": 63488 00:16:22.478 } 00:16:22.478 ] 00:16:22.478 }' 00:16:22.478 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.478 09:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.737 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.737 "name": "raid_bdev1", 00:16:22.737 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:22.737 "strip_size_kb": 64, 00:16:22.737 "state": "online", 00:16:22.737 "raid_level": "raid5f", 00:16:22.737 "superblock": true, 00:16:22.737 "num_base_bdevs": 3, 00:16:22.737 "num_base_bdevs_discovered": 3, 00:16:22.737 "num_base_bdevs_operational": 3, 00:16:22.737 "base_bdevs_list": [ 00:16:22.737 { 00:16:22.737 "name": "spare", 00:16:22.737 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:22.737 "is_configured": true, 00:16:22.737 "data_offset": 2048, 00:16:22.737 "data_size": 63488 00:16:22.737 }, 00:16:22.737 { 00:16:22.737 "name": "BaseBdev2", 00:16:22.737 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:22.737 "is_configured": true, 00:16:22.737 "data_offset": 2048, 00:16:22.737 "data_size": 63488 00:16:22.737 }, 00:16:22.737 { 00:16:22.737 "name": "BaseBdev3", 00:16:22.737 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:22.737 "is_configured": true, 00:16:22.737 "data_offset": 2048, 00:16:22.738 "data_size": 63488 00:16:22.738 } 00:16:22.738 ] 00:16:22.738 }' 00:16:22.738 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.738 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.738 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.012 [2024-11-20 09:28:48.265160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:23.012 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.013 09:28:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.013 "name": "raid_bdev1", 00:16:23.013 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:23.013 "strip_size_kb": 64, 00:16:23.013 "state": "online", 00:16:23.013 "raid_level": "raid5f", 00:16:23.013 "superblock": true, 00:16:23.013 "num_base_bdevs": 3, 00:16:23.013 "num_base_bdevs_discovered": 2, 00:16:23.013 "num_base_bdevs_operational": 2, 00:16:23.013 "base_bdevs_list": [ 00:16:23.013 { 00:16:23.013 "name": null, 00:16:23.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.013 "is_configured": false, 00:16:23.013 "data_offset": 0, 00:16:23.013 "data_size": 63488 00:16:23.013 }, 00:16:23.013 { 00:16:23.013 "name": "BaseBdev2", 00:16:23.013 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:23.013 "is_configured": true, 00:16:23.013 "data_offset": 2048, 00:16:23.013 "data_size": 63488 00:16:23.013 }, 00:16:23.013 { 00:16:23.013 "name": "BaseBdev3", 00:16:23.013 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:23.013 "is_configured": true, 00:16:23.013 "data_offset": 2048, 00:16:23.013 "data_size": 63488 00:16:23.013 } 00:16:23.013 ] 00:16:23.013 }' 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.013 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.271 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:23.272 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.272 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.272 [2024-11-20 09:28:48.724476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.272 [2024-11-20 09:28:48.724708] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:23.272 [2024-11-20 09:28:48.724746] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:23.272 [2024-11-20 09:28:48.724789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.530 [2024-11-20 09:28:48.743643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:23.530 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.530 09:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:23.530 [2024-11-20 09:28:48.752379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.465 
09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.465 "name": "raid_bdev1", 00:16:24.465 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:24.465 "strip_size_kb": 64, 00:16:24.465 "state": "online", 00:16:24.465 "raid_level": "raid5f", 00:16:24.465 "superblock": true, 00:16:24.465 "num_base_bdevs": 3, 00:16:24.465 "num_base_bdevs_discovered": 3, 00:16:24.465 "num_base_bdevs_operational": 3, 00:16:24.465 "process": { 00:16:24.465 "type": "rebuild", 00:16:24.465 "target": "spare", 00:16:24.465 "progress": { 00:16:24.465 "blocks": 20480, 00:16:24.465 "percent": 16 00:16:24.465 } 00:16:24.465 }, 00:16:24.465 "base_bdevs_list": [ 00:16:24.465 { 00:16:24.465 "name": "spare", 00:16:24.465 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:24.465 "is_configured": true, 00:16:24.465 "data_offset": 2048, 00:16:24.465 "data_size": 63488 00:16:24.465 }, 00:16:24.465 { 00:16:24.465 "name": "BaseBdev2", 00:16:24.465 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:24.465 "is_configured": true, 00:16:24.465 "data_offset": 2048, 00:16:24.465 "data_size": 63488 00:16:24.465 }, 00:16:24.465 { 00:16:24.465 "name": "BaseBdev3", 00:16:24.465 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:24.465 "is_configured": true, 00:16:24.465 "data_offset": 2048, 00:16:24.465 "data_size": 63488 00:16:24.465 } 00:16:24.465 ] 00:16:24.465 }' 00:16:24.465 09:28:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.465 09:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.465 [2024-11-20 09:28:49.903890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.723 [2024-11-20 09:28:49.963969] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:24.723 [2024-11-20 09:28:49.964079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.723 [2024-11-20 09:28:49.964100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.723 [2024-11-20 09:28:49.964112] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.723 
09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.723 "name": "raid_bdev1", 00:16:24.723 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:24.723 "strip_size_kb": 64, 00:16:24.723 "state": "online", 00:16:24.723 "raid_level": "raid5f", 00:16:24.723 "superblock": true, 00:16:24.723 "num_base_bdevs": 3, 00:16:24.723 "num_base_bdevs_discovered": 2, 00:16:24.723 "num_base_bdevs_operational": 2, 00:16:24.723 "base_bdevs_list": [ 00:16:24.723 { 00:16:24.723 "name": null, 00:16:24.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.723 "is_configured": false, 00:16:24.723 "data_offset": 0, 00:16:24.723 "data_size": 63488 00:16:24.723 }, 00:16:24.723 { 00:16:24.723 "name": "BaseBdev2", 00:16:24.723 "uuid": 
"6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:24.723 "is_configured": true, 00:16:24.723 "data_offset": 2048, 00:16:24.723 "data_size": 63488 00:16:24.723 }, 00:16:24.723 { 00:16:24.723 "name": "BaseBdev3", 00:16:24.723 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:24.723 "is_configured": true, 00:16:24.723 "data_offset": 2048, 00:16:24.723 "data_size": 63488 00:16:24.723 } 00:16:24.723 ] 00:16:24.723 }' 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.723 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.291 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.291 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.291 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.291 [2024-11-20 09:28:50.469675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.291 [2024-11-20 09:28:50.469754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.291 [2024-11-20 09:28:50.469781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:25.291 [2024-11-20 09:28:50.469799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.291 [2024-11-20 09:28:50.470390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.291 [2024-11-20 09:28:50.470443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.291 [2024-11-20 09:28:50.470565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:25.291 [2024-11-20 09:28:50.470593] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:16:25.291 [2024-11-20 09:28:50.470607] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:25.291 [2024-11-20 09:28:50.470636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.291 [2024-11-20 09:28:50.490199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:25.291 spare 00:16:25.291 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.291 09:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:25.291 [2024-11-20 09:28:50.500103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.227 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.227 "name": 
"raid_bdev1", 00:16:26.227 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:26.228 "strip_size_kb": 64, 00:16:26.228 "state": "online", 00:16:26.228 "raid_level": "raid5f", 00:16:26.228 "superblock": true, 00:16:26.228 "num_base_bdevs": 3, 00:16:26.228 "num_base_bdevs_discovered": 3, 00:16:26.228 "num_base_bdevs_operational": 3, 00:16:26.228 "process": { 00:16:26.228 "type": "rebuild", 00:16:26.228 "target": "spare", 00:16:26.228 "progress": { 00:16:26.228 "blocks": 20480, 00:16:26.228 "percent": 16 00:16:26.228 } 00:16:26.228 }, 00:16:26.228 "base_bdevs_list": [ 00:16:26.228 { 00:16:26.228 "name": "spare", 00:16:26.228 "uuid": "34743212-c9e1-5c8e-8900-321a628edfb3", 00:16:26.228 "is_configured": true, 00:16:26.228 "data_offset": 2048, 00:16:26.228 "data_size": 63488 00:16:26.228 }, 00:16:26.228 { 00:16:26.228 "name": "BaseBdev2", 00:16:26.228 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:26.228 "is_configured": true, 00:16:26.228 "data_offset": 2048, 00:16:26.228 "data_size": 63488 00:16:26.228 }, 00:16:26.228 { 00:16:26.228 "name": "BaseBdev3", 00:16:26.228 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:26.228 "is_configured": true, 00:16:26.228 "data_offset": 2048, 00:16:26.228 "data_size": 63488 00:16:26.228 } 00:16:26.228 ] 00:16:26.228 }' 00:16:26.228 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.228 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.228 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.228 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.228 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:26.228 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.228 09:28:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.228 [2024-11-20 09:28:51.635824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.492 [2024-11-20 09:28:51.711952] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.492 [2024-11-20 09:28:51.712046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.492 [2024-11-20 09:28:51.712069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.492 [2024-11-20 09:28:51.712078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.492 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.493 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.493 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.493 "name": "raid_bdev1", 00:16:26.493 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:26.493 "strip_size_kb": 64, 00:16:26.493 "state": "online", 00:16:26.493 "raid_level": "raid5f", 00:16:26.493 "superblock": true, 00:16:26.493 "num_base_bdevs": 3, 00:16:26.493 "num_base_bdevs_discovered": 2, 00:16:26.493 "num_base_bdevs_operational": 2, 00:16:26.493 "base_bdevs_list": [ 00:16:26.493 { 00:16:26.493 "name": null, 00:16:26.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.493 "is_configured": false, 00:16:26.493 "data_offset": 0, 00:16:26.493 "data_size": 63488 00:16:26.493 }, 00:16:26.493 { 00:16:26.493 "name": "BaseBdev2", 00:16:26.493 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:26.493 "is_configured": true, 00:16:26.493 "data_offset": 2048, 00:16:26.493 "data_size": 63488 00:16:26.493 }, 00:16:26.493 { 00:16:26.493 "name": "BaseBdev3", 00:16:26.493 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:26.493 "is_configured": true, 00:16:26.493 "data_offset": 2048, 00:16:26.493 "data_size": 63488 00:16:26.493 } 00:16:26.493 ] 00:16:26.493 }' 00:16:26.493 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.493 09:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.069 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:27.069 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.069 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.070 "name": "raid_bdev1", 00:16:27.070 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:27.070 "strip_size_kb": 64, 00:16:27.070 "state": "online", 00:16:27.070 "raid_level": "raid5f", 00:16:27.070 "superblock": true, 00:16:27.070 "num_base_bdevs": 3, 00:16:27.070 "num_base_bdevs_discovered": 2, 00:16:27.070 "num_base_bdevs_operational": 2, 00:16:27.070 "base_bdevs_list": [ 00:16:27.070 { 00:16:27.070 "name": null, 00:16:27.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.070 "is_configured": false, 00:16:27.070 "data_offset": 0, 00:16:27.070 "data_size": 63488 00:16:27.070 }, 00:16:27.070 { 00:16:27.070 "name": "BaseBdev2", 00:16:27.070 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:27.070 "is_configured": true, 00:16:27.070 "data_offset": 2048, 00:16:27.070 "data_size": 63488 00:16:27.070 }, 00:16:27.070 { 
00:16:27.070 "name": "BaseBdev3", 00:16:27.070 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:27.070 "is_configured": true, 00:16:27.070 "data_offset": 2048, 00:16:27.070 "data_size": 63488 00:16:27.070 } 00:16:27.070 ] 00:16:27.070 }' 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 [2024-11-20 09:28:52.411391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:27.070 [2024-11-20 09:28:52.411483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.070 [2024-11-20 09:28:52.411516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:27.070 [2024-11-20 09:28:52.411529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.070 
[2024-11-20 09:28:52.412090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.070 [2024-11-20 09:28:52.412120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.070 [2024-11-20 09:28:52.412228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:27.070 [2024-11-20 09:28:52.412250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:27.070 [2024-11-20 09:28:52.412274] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:27.070 [2024-11-20 09:28:52.412287] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:27.070 BaseBdev1 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 09:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:28.007 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:28.007 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.008 09:28:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.008 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.267 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.267 "name": "raid_bdev1", 00:16:28.267 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:28.267 "strip_size_kb": 64, 00:16:28.267 "state": "online", 00:16:28.267 "raid_level": "raid5f", 00:16:28.267 "superblock": true, 00:16:28.267 "num_base_bdevs": 3, 00:16:28.267 "num_base_bdevs_discovered": 2, 00:16:28.267 "num_base_bdevs_operational": 2, 00:16:28.267 "base_bdevs_list": [ 00:16:28.267 { 00:16:28.267 "name": null, 00:16:28.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.267 "is_configured": false, 00:16:28.267 "data_offset": 0, 00:16:28.267 "data_size": 63488 00:16:28.267 }, 00:16:28.267 { 00:16:28.267 "name": "BaseBdev2", 00:16:28.267 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:28.267 "is_configured": true, 00:16:28.267 "data_offset": 2048, 00:16:28.267 "data_size": 63488 00:16:28.267 }, 00:16:28.267 { 00:16:28.267 "name": "BaseBdev3", 00:16:28.267 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:28.267 "is_configured": true, 00:16:28.267 "data_offset": 2048, 00:16:28.267 "data_size": 63488 00:16:28.267 } 00:16:28.267 ] 00:16:28.267 }' 00:16:28.267 09:28:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.268 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.527 "name": "raid_bdev1", 00:16:28.527 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:28.527 "strip_size_kb": 64, 00:16:28.527 "state": "online", 00:16:28.527 "raid_level": "raid5f", 00:16:28.527 "superblock": true, 00:16:28.527 "num_base_bdevs": 3, 00:16:28.527 "num_base_bdevs_discovered": 2, 00:16:28.527 "num_base_bdevs_operational": 2, 00:16:28.527 "base_bdevs_list": [ 00:16:28.527 { 00:16:28.527 "name": null, 00:16:28.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.527 "is_configured": false, 00:16:28.527 "data_offset": 0, 00:16:28.527 "data_size": 63488 
00:16:28.527 }, 00:16:28.527 { 00:16:28.527 "name": "BaseBdev2", 00:16:28.527 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:28.527 "is_configured": true, 00:16:28.527 "data_offset": 2048, 00:16:28.527 "data_size": 63488 00:16:28.527 }, 00:16:28.527 { 00:16:28.527 "name": "BaseBdev3", 00:16:28.527 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:28.527 "is_configured": true, 00:16:28.527 "data_offset": 2048, 00:16:28.527 "data_size": 63488 00:16:28.527 } 00:16:28.527 ] 00:16:28.527 }' 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.527 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.786 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.786 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:28.786 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:28.786 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:28.787 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:28.787 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.787 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:28.787 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.787 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:28.787 09:28:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.787 09:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.787 [2024-11-20 09:28:53.996900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.787 [2024-11-20 09:28:53.997100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:28.787 [2024-11-20 09:28:53.997128] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:28.787 request: 00:16:28.787 { 00:16:28.787 "base_bdev": "BaseBdev1", 00:16:28.787 "raid_bdev": "raid_bdev1", 00:16:28.787 "method": "bdev_raid_add_base_bdev", 00:16:28.787 "req_id": 1 00:16:28.787 } 00:16:28.787 Got JSON-RPC error response 00:16:28.787 response: 00:16:28.787 { 00:16:28.787 "code": -22, 00:16:28.787 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:28.787 } 00:16:28.787 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:28.787 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:28.787 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:28.787 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:28.787 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:28.787 09:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.723 "name": "raid_bdev1", 00:16:29.723 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:29.723 "strip_size_kb": 64, 00:16:29.723 "state": "online", 00:16:29.723 "raid_level": "raid5f", 00:16:29.723 "superblock": true, 00:16:29.723 "num_base_bdevs": 3, 00:16:29.723 "num_base_bdevs_discovered": 2, 00:16:29.723 "num_base_bdevs_operational": 2, 00:16:29.723 "base_bdevs_list": [ 00:16:29.723 { 00:16:29.723 "name": null, 00:16:29.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.723 "is_configured": false, 00:16:29.723 
"data_offset": 0, 00:16:29.723 "data_size": 63488 00:16:29.723 }, 00:16:29.723 { 00:16:29.723 "name": "BaseBdev2", 00:16:29.723 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:29.723 "is_configured": true, 00:16:29.723 "data_offset": 2048, 00:16:29.723 "data_size": 63488 00:16:29.723 }, 00:16:29.723 { 00:16:29.723 "name": "BaseBdev3", 00:16:29.723 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:29.723 "is_configured": true, 00:16:29.723 "data_offset": 2048, 00:16:29.723 "data_size": 63488 00:16:29.723 } 00:16:29.723 ] 00:16:29.723 }' 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.723 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.297 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.298 "name": 
"raid_bdev1", 00:16:30.298 "uuid": "f70f1067-855a-4cbc-8568-73e16f253a22", 00:16:30.298 "strip_size_kb": 64, 00:16:30.298 "state": "online", 00:16:30.298 "raid_level": "raid5f", 00:16:30.298 "superblock": true, 00:16:30.298 "num_base_bdevs": 3, 00:16:30.298 "num_base_bdevs_discovered": 2, 00:16:30.298 "num_base_bdevs_operational": 2, 00:16:30.298 "base_bdevs_list": [ 00:16:30.298 { 00:16:30.298 "name": null, 00:16:30.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.298 "is_configured": false, 00:16:30.298 "data_offset": 0, 00:16:30.298 "data_size": 63488 00:16:30.298 }, 00:16:30.298 { 00:16:30.298 "name": "BaseBdev2", 00:16:30.298 "uuid": "6a01d9f8-c654-506e-b892-a879dec1f140", 00:16:30.298 "is_configured": true, 00:16:30.298 "data_offset": 2048, 00:16:30.298 "data_size": 63488 00:16:30.298 }, 00:16:30.298 { 00:16:30.298 "name": "BaseBdev3", 00:16:30.298 "uuid": "1548505d-67e5-5cb8-bc4f-df8007f4b477", 00:16:30.298 "is_configured": true, 00:16:30.298 "data_offset": 2048, 00:16:30.298 "data_size": 63488 00:16:30.298 } 00:16:30.298 ] 00:16:30.298 }' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82431 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82431 ']' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82431 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:30.298 09:28:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82431 00:16:30.298 killing process with pid 82431 00:16:30.298 Received shutdown signal, test time was about 60.000000 seconds 00:16:30.298 00:16:30.298 Latency(us) 00:16:30.298 [2024-11-20T09:28:55.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.298 [2024-11-20T09:28:55.754Z] =================================================================================================================== 00:16:30.298 [2024-11-20T09:28:55.754Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82431' 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82431 00:16:30.298 [2024-11-20 09:28:55.676277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.298 09:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82431 00:16:30.298 [2024-11-20 09:28:55.676459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.298 [2024-11-20 09:28:55.676540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.298 [2024-11-20 09:28:55.676555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:30.866 [2024-11-20 09:28:56.161525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.242 09:28:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:32.242 00:16:32.242 real 0m24.208s 00:16:32.242 user 0m31.112s 00:16:32.242 sys 0m2.868s 00:16:32.242 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.242 09:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.242 ************************************ 00:16:32.242 END TEST raid5f_rebuild_test_sb 00:16:32.242 ************************************ 00:16:32.242 09:28:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:32.242 09:28:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:32.242 09:28:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:32.242 09:28:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.242 09:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.242 ************************************ 00:16:32.242 START TEST raid5f_state_function_test 00:16:32.242 ************************************ 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83188 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:32.242 Process raid pid: 83188 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83188' 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83188 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83188 ']' 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.242 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.243 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:32.243 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.243 09:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.243 [2024-11-20 09:28:57.651713] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:16:32.243 [2024-11-20 09:28:57.652332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.502 [2024-11-20 09:28:57.833798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.761 [2024-11-20 09:28:57.967165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.761 [2024-11-20 09:28:58.193043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.761 [2024-11-20 09:28:58.193101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.326 [2024-11-20 09:28:58.543442] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.326 [2024-11-20 09:28:58.543515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.326 [2024-11-20 
09:28:58.543527] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.326 [2024-11-20 09:28:58.543539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.326 [2024-11-20 09:28:58.543546] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.326 [2024-11-20 09:28:58.543556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.326 [2024-11-20 09:28:58.543563] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.326 [2024-11-20 09:28:58.543572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.326 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.327 09:28:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.327 "name": "Existed_Raid", 00:16:33.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.327 "strip_size_kb": 64, 00:16:33.327 "state": "configuring", 00:16:33.327 "raid_level": "raid5f", 00:16:33.327 "superblock": false, 00:16:33.327 "num_base_bdevs": 4, 00:16:33.327 "num_base_bdevs_discovered": 0, 00:16:33.327 "num_base_bdevs_operational": 4, 00:16:33.327 "base_bdevs_list": [ 00:16:33.327 { 00:16:33.327 "name": "BaseBdev1", 00:16:33.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.327 "is_configured": false, 00:16:33.327 "data_offset": 0, 00:16:33.327 "data_size": 0 00:16:33.327 }, 00:16:33.327 { 00:16:33.327 "name": "BaseBdev2", 00:16:33.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.327 "is_configured": false, 00:16:33.327 "data_offset": 0, 00:16:33.327 "data_size": 0 00:16:33.327 }, 00:16:33.327 { 00:16:33.327 "name": "BaseBdev3", 00:16:33.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.327 "is_configured": false, 00:16:33.327 "data_offset": 0, 00:16:33.327 "data_size": 0 00:16:33.327 }, 00:16:33.327 { 00:16:33.327 "name": "BaseBdev4", 00:16:33.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.327 "is_configured": false, 00:16:33.327 
"data_offset": 0, 00:16:33.327 "data_size": 0 00:16:33.327 } 00:16:33.327 ] 00:16:33.327 }' 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.327 09:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.585 [2024-11-20 09:28:59.014600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.585 [2024-11-20 09:28:59.014647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.585 [2024-11-20 09:28:59.026588] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.585 [2024-11-20 09:28:59.026648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.585 [2024-11-20 09:28:59.026662] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.585 [2024-11-20 09:28:59.026673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.585 [2024-11-20 
09:28:59.026681] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.585 [2024-11-20 09:28:59.026692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.585 [2024-11-20 09:28:59.026699] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:33.585 [2024-11-20 09:28:59.026710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.585 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.844 [2024-11-20 09:28:59.075170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.844 BaseBdev1 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:16:33.844 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 [ 00:16:33.845 { 00:16:33.845 "name": "BaseBdev1", 00:16:33.845 "aliases": [ 00:16:33.845 "1a6e63d6-2af6-41f7-8090-a5851727b011" 00:16:33.845 ], 00:16:33.845 "product_name": "Malloc disk", 00:16:33.845 "block_size": 512, 00:16:33.845 "num_blocks": 65536, 00:16:33.845 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:33.845 "assigned_rate_limits": { 00:16:33.845 "rw_ios_per_sec": 0, 00:16:33.845 "rw_mbytes_per_sec": 0, 00:16:33.845 "r_mbytes_per_sec": 0, 00:16:33.845 "w_mbytes_per_sec": 0 00:16:33.845 }, 00:16:33.845 "claimed": true, 00:16:33.845 "claim_type": "exclusive_write", 00:16:33.845 "zoned": false, 00:16:33.845 "supported_io_types": { 00:16:33.845 "read": true, 00:16:33.845 "write": true, 00:16:33.845 "unmap": true, 00:16:33.845 "flush": true, 00:16:33.845 "reset": true, 00:16:33.845 "nvme_admin": false, 00:16:33.845 "nvme_io": false, 00:16:33.845 "nvme_io_md": false, 00:16:33.845 "write_zeroes": true, 00:16:33.845 "zcopy": true, 00:16:33.845 "get_zone_info": false, 00:16:33.845 "zone_management": false, 00:16:33.845 "zone_append": false, 00:16:33.845 "compare": false, 00:16:33.845 "compare_and_write": false, 00:16:33.845 "abort": true, 00:16:33.845 "seek_hole": false, 00:16:33.845 "seek_data": false, 00:16:33.845 "copy": true, 00:16:33.845 
"nvme_iov_md": false 00:16:33.845 }, 00:16:33.845 "memory_domains": [ 00:16:33.845 { 00:16:33.845 "dma_device_id": "system", 00:16:33.845 "dma_device_type": 1 00:16:33.845 }, 00:16:33.845 { 00:16:33.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.845 "dma_device_type": 2 00:16:33.845 } 00:16:33.845 ], 00:16:33.845 "driver_specific": {} 00:16:33.845 } 00:16:33.845 ] 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.845 "name": "Existed_Raid", 00:16:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.845 "strip_size_kb": 64, 00:16:33.845 "state": "configuring", 00:16:33.845 "raid_level": "raid5f", 00:16:33.845 "superblock": false, 00:16:33.845 "num_base_bdevs": 4, 00:16:33.845 "num_base_bdevs_discovered": 1, 00:16:33.845 "num_base_bdevs_operational": 4, 00:16:33.845 "base_bdevs_list": [ 00:16:33.845 { 00:16:33.845 "name": "BaseBdev1", 00:16:33.845 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:33.845 "is_configured": true, 00:16:33.845 "data_offset": 0, 00:16:33.845 "data_size": 65536 00:16:33.845 }, 00:16:33.845 { 00:16:33.845 "name": "BaseBdev2", 00:16:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.845 "is_configured": false, 00:16:33.845 "data_offset": 0, 00:16:33.845 "data_size": 0 00:16:33.845 }, 00:16:33.845 { 00:16:33.845 "name": "BaseBdev3", 00:16:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.845 "is_configured": false, 00:16:33.845 "data_offset": 0, 00:16:33.845 "data_size": 0 00:16:33.845 }, 00:16:33.845 { 00:16:33.845 "name": "BaseBdev4", 00:16:33.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.845 "is_configured": false, 00:16:33.845 "data_offset": 0, 00:16:33.845 "data_size": 0 00:16:33.845 } 00:16:33.845 ] 00:16:33.845 }' 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.845 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
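The `verify_raid_bdev_state` helper traced above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the resulting fields against the expected values. That selection and check can be sketched in Python over a trimmed copy of the JSON printed above (the real output also carries `base_bdevs_list` and other fields; the variable names here are illustrative, not SPDK APIs):

```python
import json

# Trimmed sample of what `rpc_cmd bdev_raid_get_bdevs all` returned above.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# The state-function test expects a partially assembled array to stay in
# "configuring" until every base bdev has been discovered and claimed.
assert info["state"] == "configuring"
assert info["raid_level"] == "raid5f"
assert info["num_base_bdevs_discovered"] < info["num_base_bdevs_operational"]
```

With only `BaseBdev1` created, `num_base_bdevs_discovered` is 1 of 4 operational, which is exactly the state the dump above shows.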
00:16:34.108 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.108 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 [2024-11-20 09:28:59.562409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.369 [2024-11-20 09:28:59.562491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 [2024-11-20 09:28:59.574486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.369 [2024-11-20 09:28:59.576640] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.369 [2024-11-20 09:28:59.576696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.369 [2024-11-20 09:28:59.576707] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.369 [2024-11-20 09:28:59.576720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.369 [2024-11-20 09:28:59.576729] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.369 [2024-11-20 09:28:59.576739] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.369 "name": "Existed_Raid", 00:16:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.369 "strip_size_kb": 64, 00:16:34.369 "state": "configuring", 00:16:34.369 "raid_level": "raid5f", 00:16:34.369 "superblock": false, 00:16:34.369 "num_base_bdevs": 4, 00:16:34.369 "num_base_bdevs_discovered": 1, 00:16:34.369 "num_base_bdevs_operational": 4, 00:16:34.369 "base_bdevs_list": [ 00:16:34.369 { 00:16:34.369 "name": "BaseBdev1", 00:16:34.369 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:34.369 "is_configured": true, 00:16:34.369 "data_offset": 0, 00:16:34.369 "data_size": 65536 00:16:34.369 }, 00:16:34.369 { 00:16:34.369 "name": "BaseBdev2", 00:16:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.369 "is_configured": false, 00:16:34.369 "data_offset": 0, 00:16:34.369 "data_size": 0 00:16:34.369 }, 00:16:34.369 { 00:16:34.369 "name": "BaseBdev3", 00:16:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.369 "is_configured": false, 00:16:34.369 "data_offset": 0, 00:16:34.369 "data_size": 0 00:16:34.369 }, 00:16:34.369 { 00:16:34.369 "name": "BaseBdev4", 00:16:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.369 "is_configured": false, 00:16:34.369 "data_offset": 0, 00:16:34.369 "data_size": 0 00:16:34.369 } 00:16:34.369 ] 00:16:34.369 }' 00:16:34.369 09:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.370 09:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.629 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.629 09:29:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.629 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.629 [2024-11-20 09:29:00.081977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.888 BaseBdev2 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.888 [ 00:16:34.888 { 00:16:34.888 "name": "BaseBdev2", 00:16:34.888 "aliases": [ 
00:16:34.888 "558b67e1-d141-41a1-8a3d-9af572304c29" 00:16:34.888 ], 00:16:34.888 "product_name": "Malloc disk", 00:16:34.888 "block_size": 512, 00:16:34.888 "num_blocks": 65536, 00:16:34.888 "uuid": "558b67e1-d141-41a1-8a3d-9af572304c29", 00:16:34.888 "assigned_rate_limits": { 00:16:34.888 "rw_ios_per_sec": 0, 00:16:34.888 "rw_mbytes_per_sec": 0, 00:16:34.888 "r_mbytes_per_sec": 0, 00:16:34.888 "w_mbytes_per_sec": 0 00:16:34.888 }, 00:16:34.888 "claimed": true, 00:16:34.888 "claim_type": "exclusive_write", 00:16:34.888 "zoned": false, 00:16:34.888 "supported_io_types": { 00:16:34.888 "read": true, 00:16:34.888 "write": true, 00:16:34.888 "unmap": true, 00:16:34.888 "flush": true, 00:16:34.888 "reset": true, 00:16:34.888 "nvme_admin": false, 00:16:34.888 "nvme_io": false, 00:16:34.888 "nvme_io_md": false, 00:16:34.888 "write_zeroes": true, 00:16:34.888 "zcopy": true, 00:16:34.888 "get_zone_info": false, 00:16:34.888 "zone_management": false, 00:16:34.888 "zone_append": false, 00:16:34.888 "compare": false, 00:16:34.888 "compare_and_write": false, 00:16:34.888 "abort": true, 00:16:34.888 "seek_hole": false, 00:16:34.888 "seek_data": false, 00:16:34.888 "copy": true, 00:16:34.888 "nvme_iov_md": false 00:16:34.888 }, 00:16:34.888 "memory_domains": [ 00:16:34.888 { 00:16:34.888 "dma_device_id": "system", 00:16:34.888 "dma_device_type": 1 00:16:34.888 }, 00:16:34.888 { 00:16:34.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.888 "dma_device_type": 2 00:16:34.888 } 00:16:34.888 ], 00:16:34.888 "driver_specific": {} 00:16:34.888 } 00:16:34.888 ] 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.888 "name": "Existed_Raid", 00:16:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.888 "strip_size_kb": 64, 
00:16:34.888 "state": "configuring", 00:16:34.888 "raid_level": "raid5f", 00:16:34.888 "superblock": false, 00:16:34.888 "num_base_bdevs": 4, 00:16:34.888 "num_base_bdevs_discovered": 2, 00:16:34.888 "num_base_bdevs_operational": 4, 00:16:34.888 "base_bdevs_list": [ 00:16:34.888 { 00:16:34.888 "name": "BaseBdev1", 00:16:34.888 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:34.888 "is_configured": true, 00:16:34.888 "data_offset": 0, 00:16:34.888 "data_size": 65536 00:16:34.888 }, 00:16:34.888 { 00:16:34.888 "name": "BaseBdev2", 00:16:34.888 "uuid": "558b67e1-d141-41a1-8a3d-9af572304c29", 00:16:34.888 "is_configured": true, 00:16:34.888 "data_offset": 0, 00:16:34.888 "data_size": 65536 00:16:34.888 }, 00:16:34.888 { 00:16:34.888 "name": "BaseBdev3", 00:16:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.888 "is_configured": false, 00:16:34.888 "data_offset": 0, 00:16:34.888 "data_size": 0 00:16:34.888 }, 00:16:34.888 { 00:16:34.888 "name": "BaseBdev4", 00:16:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.888 "is_configured": false, 00:16:34.888 "data_offset": 0, 00:16:34.888 "data_size": 0 00:16:34.888 } 00:16:34.888 ] 00:16:34.888 }' 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.888 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.147 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.147 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.407 [2024-11-20 09:29:00.653627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.407 BaseBdev3 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.407 [ 00:16:35.407 { 00:16:35.407 "name": "BaseBdev3", 00:16:35.407 "aliases": [ 00:16:35.407 "105a6092-1dcd-46c8-a814-6bf4ca23bccb" 00:16:35.407 ], 00:16:35.407 "product_name": "Malloc disk", 00:16:35.407 "block_size": 512, 00:16:35.407 "num_blocks": 65536, 00:16:35.407 "uuid": "105a6092-1dcd-46c8-a814-6bf4ca23bccb", 00:16:35.407 "assigned_rate_limits": { 00:16:35.407 "rw_ios_per_sec": 0, 00:16:35.407 "rw_mbytes_per_sec": 0, 00:16:35.407 "r_mbytes_per_sec": 0, 00:16:35.407 
"w_mbytes_per_sec": 0 00:16:35.407 }, 00:16:35.407 "claimed": true, 00:16:35.407 "claim_type": "exclusive_write", 00:16:35.407 "zoned": false, 00:16:35.407 "supported_io_types": { 00:16:35.407 "read": true, 00:16:35.407 "write": true, 00:16:35.407 "unmap": true, 00:16:35.407 "flush": true, 00:16:35.407 "reset": true, 00:16:35.407 "nvme_admin": false, 00:16:35.407 "nvme_io": false, 00:16:35.407 "nvme_io_md": false, 00:16:35.407 "write_zeroes": true, 00:16:35.407 "zcopy": true, 00:16:35.407 "get_zone_info": false, 00:16:35.407 "zone_management": false, 00:16:35.407 "zone_append": false, 00:16:35.407 "compare": false, 00:16:35.407 "compare_and_write": false, 00:16:35.407 "abort": true, 00:16:35.407 "seek_hole": false, 00:16:35.407 "seek_data": false, 00:16:35.407 "copy": true, 00:16:35.407 "nvme_iov_md": false 00:16:35.407 }, 00:16:35.407 "memory_domains": [ 00:16:35.407 { 00:16:35.407 "dma_device_id": "system", 00:16:35.407 "dma_device_type": 1 00:16:35.407 }, 00:16:35.407 { 00:16:35.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.407 "dma_device_type": 2 00:16:35.407 } 00:16:35.407 ], 00:16:35.407 "driver_specific": {} 00:16:35.407 } 00:16:35.407 ] 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.407 "name": "Existed_Raid", 00:16:35.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.407 "strip_size_kb": 64, 00:16:35.407 "state": "configuring", 00:16:35.407 "raid_level": "raid5f", 00:16:35.407 "superblock": false, 00:16:35.407 "num_base_bdevs": 4, 00:16:35.407 "num_base_bdevs_discovered": 3, 00:16:35.407 "num_base_bdevs_operational": 4, 00:16:35.407 "base_bdevs_list": [ 00:16:35.407 { 00:16:35.407 "name": "BaseBdev1", 00:16:35.407 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:35.407 
"is_configured": true, 00:16:35.407 "data_offset": 0, 00:16:35.407 "data_size": 65536 00:16:35.407 }, 00:16:35.407 { 00:16:35.407 "name": "BaseBdev2", 00:16:35.407 "uuid": "558b67e1-d141-41a1-8a3d-9af572304c29", 00:16:35.407 "is_configured": true, 00:16:35.407 "data_offset": 0, 00:16:35.407 "data_size": 65536 00:16:35.407 }, 00:16:35.407 { 00:16:35.407 "name": "BaseBdev3", 00:16:35.407 "uuid": "105a6092-1dcd-46c8-a814-6bf4ca23bccb", 00:16:35.407 "is_configured": true, 00:16:35.407 "data_offset": 0, 00:16:35.407 "data_size": 65536 00:16:35.407 }, 00:16:35.407 { 00:16:35.407 "name": "BaseBdev4", 00:16:35.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.407 "is_configured": false, 00:16:35.407 "data_offset": 0, 00:16:35.407 "data_size": 0 00:16:35.407 } 00:16:35.407 ] 00:16:35.407 }' 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.407 09:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.975 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:35.975 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.975 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.975 [2024-11-20 09:29:01.211145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:35.976 [2024-11-20 09:29:01.211334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:35.976 [2024-11-20 09:29:01.211365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:35.976 [2024-11-20 09:29:01.211724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:35.976 [2024-11-20 09:29:01.219309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
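Once the fourth base bdev arrives, the configure path above logs `blockcnt 196608, blocklen 512` for the assembled array. That number follows from raid5f reserving one base bdev's worth of capacity for parity: with 4 base bdevs of 65536 blocks each (each created by `bdev_malloc_create 32 512`, i.e. 32 MiB in 512-byte blocks), usable capacity is (4 - 1) × 65536 blocks. A quick arithmetic sketch (plain Python, no SPDK involved):

```python
num_base_bdevs = 4
blocks_per_base_bdev = 65536  # bdev_malloc_create 32 512 -> 32 MiB / 512 B
block_size = 512

# raid5f distributes one parity strip per stripe across the members,
# so one base bdev's capacity in total is consumed by parity.
usable_blocks = (num_base_bdevs - 1) * blocks_per_base_bdev
usable_bytes = usable_blocks * block_size

assert usable_blocks == 196608           # matches the blockcnt logged above
assert usable_bytes == 96 * 1024 * 1024  # 96 MiB of usable capacity
```

The same relation explains why `num_blocks` in the Raid Volume dump further down is 196608 while each Malloc member reports 65536.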
00:16:35.976 [2024-11-20 09:29:01.219386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:35.976 [2024-11-20 09:29:01.219783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.976 BaseBdev4 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.976 [ 00:16:35.976 { 00:16:35.976 "name": "BaseBdev4", 00:16:35.976 "aliases": [ 00:16:35.976 
"7abbf579-97b8-4248-a9ad-369935aa18ec" 00:16:35.976 ], 00:16:35.976 "product_name": "Malloc disk", 00:16:35.976 "block_size": 512, 00:16:35.976 "num_blocks": 65536, 00:16:35.976 "uuid": "7abbf579-97b8-4248-a9ad-369935aa18ec", 00:16:35.976 "assigned_rate_limits": { 00:16:35.976 "rw_ios_per_sec": 0, 00:16:35.976 "rw_mbytes_per_sec": 0, 00:16:35.976 "r_mbytes_per_sec": 0, 00:16:35.976 "w_mbytes_per_sec": 0 00:16:35.976 }, 00:16:35.976 "claimed": true, 00:16:35.976 "claim_type": "exclusive_write", 00:16:35.976 "zoned": false, 00:16:35.976 "supported_io_types": { 00:16:35.976 "read": true, 00:16:35.976 "write": true, 00:16:35.976 "unmap": true, 00:16:35.976 "flush": true, 00:16:35.976 "reset": true, 00:16:35.976 "nvme_admin": false, 00:16:35.976 "nvme_io": false, 00:16:35.976 "nvme_io_md": false, 00:16:35.976 "write_zeroes": true, 00:16:35.976 "zcopy": true, 00:16:35.976 "get_zone_info": false, 00:16:35.976 "zone_management": false, 00:16:35.976 "zone_append": false, 00:16:35.976 "compare": false, 00:16:35.976 "compare_and_write": false, 00:16:35.976 "abort": true, 00:16:35.976 "seek_hole": false, 00:16:35.976 "seek_data": false, 00:16:35.976 "copy": true, 00:16:35.976 "nvme_iov_md": false 00:16:35.976 }, 00:16:35.976 "memory_domains": [ 00:16:35.976 { 00:16:35.976 "dma_device_id": "system", 00:16:35.976 "dma_device_type": 1 00:16:35.976 }, 00:16:35.976 { 00:16:35.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.976 "dma_device_type": 2 00:16:35.976 } 00:16:35.976 ], 00:16:35.976 "driver_specific": {} 00:16:35.976 } 00:16:35.976 ] 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.976 
09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.976 "name": "Existed_Raid", 00:16:35.976 "uuid": "4fa2361a-70b1-452e-b83d-3216e4a62d7e", 00:16:35.976 "strip_size_kb": 64, 00:16:35.976 "state": 
"online", 00:16:35.976 "raid_level": "raid5f", 00:16:35.976 "superblock": false, 00:16:35.976 "num_base_bdevs": 4, 00:16:35.976 "num_base_bdevs_discovered": 4, 00:16:35.976 "num_base_bdevs_operational": 4, 00:16:35.976 "base_bdevs_list": [ 00:16:35.976 { 00:16:35.976 "name": "BaseBdev1", 00:16:35.976 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:35.976 "is_configured": true, 00:16:35.976 "data_offset": 0, 00:16:35.976 "data_size": 65536 00:16:35.976 }, 00:16:35.976 { 00:16:35.976 "name": "BaseBdev2", 00:16:35.976 "uuid": "558b67e1-d141-41a1-8a3d-9af572304c29", 00:16:35.976 "is_configured": true, 00:16:35.976 "data_offset": 0, 00:16:35.976 "data_size": 65536 00:16:35.976 }, 00:16:35.976 { 00:16:35.976 "name": "BaseBdev3", 00:16:35.976 "uuid": "105a6092-1dcd-46c8-a814-6bf4ca23bccb", 00:16:35.976 "is_configured": true, 00:16:35.976 "data_offset": 0, 00:16:35.976 "data_size": 65536 00:16:35.976 }, 00:16:35.976 { 00:16:35.976 "name": "BaseBdev4", 00:16:35.976 "uuid": "7abbf579-97b8-4248-a9ad-369935aa18ec", 00:16:35.976 "is_configured": true, 00:16:35.976 "data_offset": 0, 00:16:35.976 "data_size": 65536 00:16:35.976 } 00:16:35.976 ] 00:16:35.976 }' 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.976 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.543 09:29:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.543 [2024-11-20 09:29:01.708869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.543 "name": "Existed_Raid", 00:16:36.543 "aliases": [ 00:16:36.543 "4fa2361a-70b1-452e-b83d-3216e4a62d7e" 00:16:36.543 ], 00:16:36.543 "product_name": "Raid Volume", 00:16:36.543 "block_size": 512, 00:16:36.543 "num_blocks": 196608, 00:16:36.543 "uuid": "4fa2361a-70b1-452e-b83d-3216e4a62d7e", 00:16:36.543 "assigned_rate_limits": { 00:16:36.543 "rw_ios_per_sec": 0, 00:16:36.543 "rw_mbytes_per_sec": 0, 00:16:36.543 "r_mbytes_per_sec": 0, 00:16:36.543 "w_mbytes_per_sec": 0 00:16:36.543 }, 00:16:36.543 "claimed": false, 00:16:36.543 "zoned": false, 00:16:36.543 "supported_io_types": { 00:16:36.543 "read": true, 00:16:36.543 "write": true, 00:16:36.543 "unmap": false, 00:16:36.543 "flush": false, 00:16:36.543 "reset": true, 00:16:36.543 "nvme_admin": false, 00:16:36.543 "nvme_io": false, 00:16:36.543 "nvme_io_md": false, 00:16:36.543 "write_zeroes": true, 00:16:36.543 "zcopy": false, 00:16:36.543 "get_zone_info": false, 00:16:36.543 "zone_management": false, 00:16:36.543 "zone_append": false, 00:16:36.543 "compare": false, 00:16:36.543 "compare_and_write": false, 00:16:36.543 "abort": false, 
00:16:36.543 "seek_hole": false, 00:16:36.543 "seek_data": false, 00:16:36.543 "copy": false, 00:16:36.543 "nvme_iov_md": false 00:16:36.543 }, 00:16:36.543 "driver_specific": { 00:16:36.543 "raid": { 00:16:36.543 "uuid": "4fa2361a-70b1-452e-b83d-3216e4a62d7e", 00:16:36.543 "strip_size_kb": 64, 00:16:36.543 "state": "online", 00:16:36.543 "raid_level": "raid5f", 00:16:36.543 "superblock": false, 00:16:36.543 "num_base_bdevs": 4, 00:16:36.543 "num_base_bdevs_discovered": 4, 00:16:36.543 "num_base_bdevs_operational": 4, 00:16:36.543 "base_bdevs_list": [ 00:16:36.543 { 00:16:36.543 "name": "BaseBdev1", 00:16:36.543 "uuid": "1a6e63d6-2af6-41f7-8090-a5851727b011", 00:16:36.543 "is_configured": true, 00:16:36.543 "data_offset": 0, 00:16:36.543 "data_size": 65536 00:16:36.543 }, 00:16:36.543 { 00:16:36.543 "name": "BaseBdev2", 00:16:36.543 "uuid": "558b67e1-d141-41a1-8a3d-9af572304c29", 00:16:36.543 "is_configured": true, 00:16:36.543 "data_offset": 0, 00:16:36.543 "data_size": 65536 00:16:36.543 }, 00:16:36.543 { 00:16:36.543 "name": "BaseBdev3", 00:16:36.543 "uuid": "105a6092-1dcd-46c8-a814-6bf4ca23bccb", 00:16:36.543 "is_configured": true, 00:16:36.543 "data_offset": 0, 00:16:36.543 "data_size": 65536 00:16:36.543 }, 00:16:36.543 { 00:16:36.543 "name": "BaseBdev4", 00:16:36.543 "uuid": "7abbf579-97b8-4248-a9ad-369935aa18ec", 00:16:36.543 "is_configured": true, 00:16:36.543 "data_offset": 0, 00:16:36.543 "data_size": 65536 00:16:36.543 } 00:16:36.543 ] 00:16:36.543 } 00:16:36.543 } 00:16:36.543 }' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:36.543 BaseBdev2 00:16:36.543 BaseBdev3 00:16:36.543 BaseBdev4' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.543 09:29:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.543 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.544 09:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.544 [2024-11-20 09:29:01.984238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.803 09:29:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.803 "name": "Existed_Raid", 00:16:36.803 "uuid": "4fa2361a-70b1-452e-b83d-3216e4a62d7e", 00:16:36.803 "strip_size_kb": 64, 00:16:36.803 "state": "online", 00:16:36.803 "raid_level": "raid5f", 00:16:36.803 "superblock": false, 00:16:36.803 "num_base_bdevs": 4, 00:16:36.803 "num_base_bdevs_discovered": 3, 00:16:36.803 "num_base_bdevs_operational": 3, 00:16:36.803 "base_bdevs_list": [ 00:16:36.803 { 00:16:36.803 "name": null, 00:16:36.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.803 "is_configured": false, 00:16:36.803 "data_offset": 0, 00:16:36.803 "data_size": 65536 00:16:36.803 }, 00:16:36.803 { 00:16:36.803 "name": "BaseBdev2", 00:16:36.803 "uuid": "558b67e1-d141-41a1-8a3d-9af572304c29", 00:16:36.803 "is_configured": true, 00:16:36.803 "data_offset": 0, 00:16:36.803 "data_size": 65536 00:16:36.803 }, 00:16:36.803 { 00:16:36.803 "name": "BaseBdev3", 00:16:36.803 "uuid": "105a6092-1dcd-46c8-a814-6bf4ca23bccb", 00:16:36.803 "is_configured": true, 00:16:36.803 
"data_offset": 0, 00:16:36.803 "data_size": 65536 00:16:36.803 }, 00:16:36.803 { 00:16:36.803 "name": "BaseBdev4", 00:16:36.803 "uuid": "7abbf579-97b8-4248-a9ad-369935aa18ec", 00:16:36.803 "is_configured": true, 00:16:36.803 "data_offset": 0, 00:16:36.803 "data_size": 65536 00:16:36.803 } 00:16:36.803 ] 00:16:36.803 }' 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.803 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.371 [2024-11-20 09:29:02.603708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.371 
[2024-11-20 09:29:02.603827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.371 [2024-11-20 09:29:02.708711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.371 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.371 [2024-11-20 09:29:02.764677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # 
(( i++ )) 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.630 09:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.630 [2024-11-20 09:29:02.929232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:37.630 [2024-11-20 09:29:02.929395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.630 09:29:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.630 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.889 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:37.889 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:37.889 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:37.889 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:37.889 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 BaseBdev2 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 [ 00:16:37.890 { 00:16:37.890 "name": "BaseBdev2", 00:16:37.890 "aliases": [ 00:16:37.890 "bfe5900b-0d07-4107-9d8f-75a38518b1d0" 00:16:37.890 ], 00:16:37.890 "product_name": "Malloc disk", 00:16:37.890 "block_size": 512, 00:16:37.890 "num_blocks": 65536, 00:16:37.890 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:37.890 "assigned_rate_limits": { 00:16:37.890 "rw_ios_per_sec": 0, 00:16:37.890 "rw_mbytes_per_sec": 0, 00:16:37.890 "r_mbytes_per_sec": 0, 00:16:37.890 "w_mbytes_per_sec": 0 00:16:37.890 }, 00:16:37.890 "claimed": false, 00:16:37.890 "zoned": false, 00:16:37.890 "supported_io_types": { 00:16:37.890 "read": true, 00:16:37.890 "write": true, 00:16:37.890 "unmap": true, 00:16:37.890 "flush": true, 00:16:37.890 "reset": true, 00:16:37.890 "nvme_admin": false, 00:16:37.890 "nvme_io": false, 00:16:37.890 "nvme_io_md": false, 00:16:37.890 "write_zeroes": true, 00:16:37.890 "zcopy": true, 00:16:37.890 "get_zone_info": false, 00:16:37.890 "zone_management": false, 00:16:37.890 "zone_append": false, 00:16:37.890 "compare": false, 
00:16:37.890 "compare_and_write": false, 00:16:37.890 "abort": true, 00:16:37.890 "seek_hole": false, 00:16:37.890 "seek_data": false, 00:16:37.890 "copy": true, 00:16:37.890 "nvme_iov_md": false 00:16:37.890 }, 00:16:37.890 "memory_domains": [ 00:16:37.890 { 00:16:37.890 "dma_device_id": "system", 00:16:37.890 "dma_device_type": 1 00:16:37.890 }, 00:16:37.890 { 00:16:37.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.890 "dma_device_type": 2 00:16:37.890 } 00:16:37.890 ], 00:16:37.890 "driver_specific": {} 00:16:37.890 } 00:16:37.890 ] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 BaseBdev3 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 [ 00:16:37.890 { 00:16:37.890 "name": "BaseBdev3", 00:16:37.890 "aliases": [ 00:16:37.890 "ac4fe083-13cc-4580-8a55-0a8f5acfd108" 00:16:37.890 ], 00:16:37.890 "product_name": "Malloc disk", 00:16:37.890 "block_size": 512, 00:16:37.890 "num_blocks": 65536, 00:16:37.890 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:37.890 "assigned_rate_limits": { 00:16:37.890 "rw_ios_per_sec": 0, 00:16:37.890 "rw_mbytes_per_sec": 0, 00:16:37.890 "r_mbytes_per_sec": 0, 00:16:37.890 "w_mbytes_per_sec": 0 00:16:37.890 }, 00:16:37.890 "claimed": false, 00:16:37.890 "zoned": false, 00:16:37.890 "supported_io_types": { 00:16:37.890 "read": true, 00:16:37.890 "write": true, 00:16:37.890 "unmap": true, 00:16:37.890 "flush": true, 00:16:37.890 "reset": true, 00:16:37.890 "nvme_admin": false, 00:16:37.890 "nvme_io": false, 00:16:37.890 "nvme_io_md": false, 00:16:37.890 "write_zeroes": true, 00:16:37.890 "zcopy": true, 00:16:37.890 "get_zone_info": false, 00:16:37.890 "zone_management": false, 00:16:37.890 "zone_append": 
false, 00:16:37.890 "compare": false, 00:16:37.890 "compare_and_write": false, 00:16:37.890 "abort": true, 00:16:37.890 "seek_hole": false, 00:16:37.890 "seek_data": false, 00:16:37.890 "copy": true, 00:16:37.890 "nvme_iov_md": false 00:16:37.890 }, 00:16:37.890 "memory_domains": [ 00:16:37.890 { 00:16:37.890 "dma_device_id": "system", 00:16:37.890 "dma_device_type": 1 00:16:37.890 }, 00:16:37.890 { 00:16:37.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.890 "dma_device_type": 2 00:16:37.890 } 00:16:37.890 ], 00:16:37.890 "driver_specific": {} 00:16:37.890 } 00:16:37.890 ] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 BaseBdev4 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:37.890 09:29:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.890 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 [ 00:16:37.890 { 00:16:37.890 "name": "BaseBdev4", 00:16:37.890 "aliases": [ 00:16:37.890 "eebb31b9-510c-4ca4-a36a-e47e5d47b524" 00:16:37.890 ], 00:16:37.890 "product_name": "Malloc disk", 00:16:37.890 "block_size": 512, 00:16:37.890 "num_blocks": 65536, 00:16:37.890 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:37.890 "assigned_rate_limits": { 00:16:37.890 "rw_ios_per_sec": 0, 00:16:37.890 "rw_mbytes_per_sec": 0, 00:16:37.890 "r_mbytes_per_sec": 0, 00:16:37.890 "w_mbytes_per_sec": 0 00:16:37.890 }, 00:16:37.891 "claimed": false, 00:16:37.891 "zoned": false, 00:16:37.891 "supported_io_types": { 00:16:37.891 "read": true, 00:16:37.891 "write": true, 00:16:37.891 "unmap": true, 00:16:37.891 "flush": true, 00:16:37.891 "reset": true, 00:16:37.891 "nvme_admin": false, 00:16:37.891 "nvme_io": false, 00:16:37.891 "nvme_io_md": false, 00:16:37.891 "write_zeroes": true, 00:16:37.891 "zcopy": true, 00:16:37.891 "get_zone_info": false, 00:16:37.891 
"zone_management": false, 00:16:37.891 "zone_append": false, 00:16:37.891 "compare": false, 00:16:37.891 "compare_and_write": false, 00:16:37.891 "abort": true, 00:16:37.891 "seek_hole": false, 00:16:37.891 "seek_data": false, 00:16:37.891 "copy": true, 00:16:37.891 "nvme_iov_md": false 00:16:37.891 }, 00:16:37.891 "memory_domains": [ 00:16:37.891 { 00:16:37.891 "dma_device_id": "system", 00:16:37.891 "dma_device_type": 1 00:16:37.891 }, 00:16:37.891 { 00:16:37.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.891 "dma_device_type": 2 00:16:37.891 } 00:16:37.891 ], 00:16:37.891 "driver_specific": {} 00:16:37.891 } 00:16:37.891 ] 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.891 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.149 [2024-11-20 09:29:03.343256] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.149 [2024-11-20 09:29:03.343355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.149 [2024-11-20 09:29:03.343409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.149 [2024-11-20 09:29:03.345536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:16:38.149 [2024-11-20 09:29:03.345641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.149 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.149 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.149 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.149 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.150 "name": "Existed_Raid", 00:16:38.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.150 "strip_size_kb": 64, 00:16:38.150 "state": "configuring", 00:16:38.150 "raid_level": "raid5f", 00:16:38.150 "superblock": false, 00:16:38.150 "num_base_bdevs": 4, 00:16:38.150 "num_base_bdevs_discovered": 3, 00:16:38.150 "num_base_bdevs_operational": 4, 00:16:38.150 "base_bdevs_list": [ 00:16:38.150 { 00:16:38.150 "name": "BaseBdev1", 00:16:38.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.150 "is_configured": false, 00:16:38.150 "data_offset": 0, 00:16:38.150 "data_size": 0 00:16:38.150 }, 00:16:38.150 { 00:16:38.150 "name": "BaseBdev2", 00:16:38.150 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:38.150 "is_configured": true, 00:16:38.150 "data_offset": 0, 00:16:38.150 "data_size": 65536 00:16:38.150 }, 00:16:38.150 { 00:16:38.150 "name": "BaseBdev3", 00:16:38.150 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:38.150 "is_configured": true, 00:16:38.150 "data_offset": 0, 00:16:38.150 "data_size": 65536 00:16:38.150 }, 00:16:38.150 { 00:16:38.150 "name": "BaseBdev4", 00:16:38.150 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:38.150 "is_configured": true, 00:16:38.150 "data_offset": 0, 00:16:38.150 "data_size": 65536 00:16:38.150 } 00:16:38.150 ] 00:16:38.150 }' 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.150 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:38.408 [2024-11-20 09:29:03.774505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.408 09:29:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.408 "name": "Existed_Raid", 00:16:38.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.408 "strip_size_kb": 64, 00:16:38.408 "state": "configuring", 00:16:38.408 "raid_level": "raid5f", 00:16:38.408 "superblock": false, 00:16:38.408 "num_base_bdevs": 4, 00:16:38.408 "num_base_bdevs_discovered": 2, 00:16:38.408 "num_base_bdevs_operational": 4, 00:16:38.408 "base_bdevs_list": [ 00:16:38.408 { 00:16:38.408 "name": "BaseBdev1", 00:16:38.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.408 "is_configured": false, 00:16:38.408 "data_offset": 0, 00:16:38.408 "data_size": 0 00:16:38.408 }, 00:16:38.408 { 00:16:38.408 "name": null, 00:16:38.408 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:38.408 "is_configured": false, 00:16:38.408 "data_offset": 0, 00:16:38.408 "data_size": 65536 00:16:38.408 }, 00:16:38.408 { 00:16:38.408 "name": "BaseBdev3", 00:16:38.408 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:38.408 "is_configured": true, 00:16:38.408 "data_offset": 0, 00:16:38.408 "data_size": 65536 00:16:38.408 }, 00:16:38.408 { 00:16:38.408 "name": "BaseBdev4", 00:16:38.408 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:38.408 "is_configured": true, 00:16:38.408 "data_offset": 0, 00:16:38.408 "data_size": 65536 00:16:38.408 } 00:16:38.408 ] 00:16:38.408 }' 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.408 09:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.976 09:29:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.976 [2024-11-20 09:29:04.298634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.976 BaseBdev1 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.976 09:29:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.976 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.976 [ 00:16:38.976 { 00:16:38.976 "name": "BaseBdev1", 00:16:38.976 "aliases": [ 00:16:38.976 "824c7f03-c8cf-4b22-b9ee-a5a7d8842508" 00:16:38.976 ], 00:16:38.976 "product_name": "Malloc disk", 00:16:38.976 "block_size": 512, 00:16:38.977 "num_blocks": 65536, 00:16:38.977 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:38.977 "assigned_rate_limits": { 00:16:38.977 "rw_ios_per_sec": 0, 00:16:38.977 "rw_mbytes_per_sec": 0, 00:16:38.977 "r_mbytes_per_sec": 0, 00:16:38.977 "w_mbytes_per_sec": 0 00:16:38.977 }, 00:16:38.977 "claimed": true, 00:16:38.977 "claim_type": "exclusive_write", 00:16:38.977 "zoned": false, 00:16:38.977 "supported_io_types": { 00:16:38.977 "read": true, 00:16:38.977 "write": true, 00:16:38.977 "unmap": true, 00:16:38.977 "flush": true, 00:16:38.977 "reset": true, 00:16:38.977 "nvme_admin": false, 00:16:38.977 "nvme_io": false, 00:16:38.977 "nvme_io_md": false, 00:16:38.977 "write_zeroes": true, 00:16:38.977 "zcopy": true, 00:16:38.977 "get_zone_info": false, 00:16:38.977 "zone_management": false, 00:16:38.977 "zone_append": false, 00:16:38.977 "compare": false, 00:16:38.977 "compare_and_write": false, 00:16:38.977 "abort": true, 00:16:38.977 "seek_hole": false, 00:16:38.977 "seek_data": false, 00:16:38.977 "copy": true, 00:16:38.977 "nvme_iov_md": false 00:16:38.977 }, 00:16:38.977 "memory_domains": [ 00:16:38.977 { 00:16:38.977 "dma_device_id": "system", 00:16:38.977 "dma_device_type": 1 00:16:38.977 }, 00:16:38.977 { 00:16:38.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.977 
"dma_device_type": 2 00:16:38.977 } 00:16:38.977 ], 00:16:38.977 "driver_specific": {} 00:16:38.977 } 00:16:38.977 ] 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.977 
09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.977 "name": "Existed_Raid", 00:16:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.977 "strip_size_kb": 64, 00:16:38.977 "state": "configuring", 00:16:38.977 "raid_level": "raid5f", 00:16:38.977 "superblock": false, 00:16:38.977 "num_base_bdevs": 4, 00:16:38.977 "num_base_bdevs_discovered": 3, 00:16:38.977 "num_base_bdevs_operational": 4, 00:16:38.977 "base_bdevs_list": [ 00:16:38.977 { 00:16:38.977 "name": "BaseBdev1", 00:16:38.977 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:38.977 "is_configured": true, 00:16:38.977 "data_offset": 0, 00:16:38.977 "data_size": 65536 00:16:38.977 }, 00:16:38.977 { 00:16:38.977 "name": null, 00:16:38.977 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:38.977 "is_configured": false, 00:16:38.977 "data_offset": 0, 00:16:38.977 "data_size": 65536 00:16:38.977 }, 00:16:38.977 { 00:16:38.977 "name": "BaseBdev3", 00:16:38.977 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:38.977 "is_configured": true, 00:16:38.977 "data_offset": 0, 00:16:38.977 "data_size": 65536 00:16:38.977 }, 00:16:38.977 { 00:16:38.977 "name": "BaseBdev4", 00:16:38.977 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:38.977 "is_configured": true, 00:16:38.977 "data_offset": 0, 00:16:38.977 "data_size": 65536 00:16:38.977 } 00:16:38.977 ] 00:16:38.977 }' 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.977 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.544 
09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.544 [2024-11-20 09:29:04.809876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.544 09:29:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.544 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.545 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.545 "name": "Existed_Raid", 00:16:39.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.545 "strip_size_kb": 64, 00:16:39.545 "state": "configuring", 00:16:39.545 "raid_level": "raid5f", 00:16:39.545 "superblock": false, 00:16:39.545 "num_base_bdevs": 4, 00:16:39.545 "num_base_bdevs_discovered": 2, 00:16:39.545 "num_base_bdevs_operational": 4, 00:16:39.545 "base_bdevs_list": [ 00:16:39.545 { 00:16:39.545 "name": "BaseBdev1", 00:16:39.545 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:39.545 "is_configured": true, 00:16:39.545 "data_offset": 0, 00:16:39.545 "data_size": 65536 00:16:39.545 }, 00:16:39.545 { 00:16:39.545 "name": null, 00:16:39.545 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:39.545 "is_configured": false, 00:16:39.545 "data_offset": 0, 00:16:39.545 "data_size": 65536 00:16:39.545 }, 00:16:39.545 { 00:16:39.545 "name": null, 00:16:39.545 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:39.545 "is_configured": false, 00:16:39.545 "data_offset": 0, 00:16:39.545 "data_size": 65536 00:16:39.545 }, 00:16:39.545 { 00:16:39.545 "name": "BaseBdev4", 
00:16:39.545 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:39.545 "is_configured": true, 00:16:39.545 "data_offset": 0, 00:16:39.545 "data_size": 65536 00:16:39.545 } 00:16:39.545 ] 00:16:39.545 }' 00:16:39.545 09:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.545 09:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.803 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.803 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.803 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.803 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 [2024-11-20 09:29:05.293077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.062 09:29:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.062 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.063 "name": "Existed_Raid", 00:16:40.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.063 "strip_size_kb": 64, 00:16:40.063 "state": "configuring", 00:16:40.063 "raid_level": "raid5f", 00:16:40.063 "superblock": false, 00:16:40.063 "num_base_bdevs": 4, 00:16:40.063 "num_base_bdevs_discovered": 3, 00:16:40.063 "num_base_bdevs_operational": 4, 00:16:40.063 "base_bdevs_list": [ 00:16:40.063 { 
00:16:40.063 "name": "BaseBdev1", 00:16:40.063 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:40.063 "is_configured": true, 00:16:40.063 "data_offset": 0, 00:16:40.063 "data_size": 65536 00:16:40.063 }, 00:16:40.063 { 00:16:40.063 "name": null, 00:16:40.063 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:40.063 "is_configured": false, 00:16:40.063 "data_offset": 0, 00:16:40.063 "data_size": 65536 00:16:40.063 }, 00:16:40.063 { 00:16:40.063 "name": "BaseBdev3", 00:16:40.063 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:40.063 "is_configured": true, 00:16:40.063 "data_offset": 0, 00:16:40.063 "data_size": 65536 00:16:40.063 }, 00:16:40.063 { 00:16:40.063 "name": "BaseBdev4", 00:16:40.063 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:40.063 "is_configured": true, 00:16:40.063 "data_offset": 0, 00:16:40.063 "data_size": 65536 00:16:40.063 } 00:16:40.063 ] 00:16:40.063 }' 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.063 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.322 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.322 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.322 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.322 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.581 [2024-11-20 09:29:05.780291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.581 "name": "Existed_Raid", 00:16:40.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.581 "strip_size_kb": 64, 00:16:40.581 "state": "configuring", 00:16:40.581 "raid_level": "raid5f", 00:16:40.581 "superblock": false, 00:16:40.581 "num_base_bdevs": 4, 00:16:40.581 "num_base_bdevs_discovered": 2, 00:16:40.581 "num_base_bdevs_operational": 4, 00:16:40.581 "base_bdevs_list": [ 00:16:40.581 { 00:16:40.581 "name": null, 00:16:40.581 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:40.581 "is_configured": false, 00:16:40.581 "data_offset": 0, 00:16:40.581 "data_size": 65536 00:16:40.581 }, 00:16:40.581 { 00:16:40.581 "name": null, 00:16:40.581 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:40.581 "is_configured": false, 00:16:40.581 "data_offset": 0, 00:16:40.581 "data_size": 65536 00:16:40.581 }, 00:16:40.581 { 00:16:40.581 "name": "BaseBdev3", 00:16:40.581 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:40.581 "is_configured": true, 00:16:40.581 "data_offset": 0, 00:16:40.581 "data_size": 65536 00:16:40.581 }, 00:16:40.581 { 00:16:40.581 "name": "BaseBdev4", 00:16:40.581 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:40.581 "is_configured": true, 00:16:40.581 "data_offset": 0, 00:16:40.581 "data_size": 65536 00:16:40.581 } 00:16:40.581 ] 00:16:40.581 }' 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.581 09:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 
-- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 [2024-11-20 09:29:06.370933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.150 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.151 "name": "Existed_Raid", 00:16:41.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.151 "strip_size_kb": 64, 00:16:41.151 "state": "configuring", 00:16:41.151 "raid_level": "raid5f", 00:16:41.151 "superblock": false, 00:16:41.151 "num_base_bdevs": 4, 00:16:41.151 "num_base_bdevs_discovered": 3, 00:16:41.151 "num_base_bdevs_operational": 4, 00:16:41.151 "base_bdevs_list": [ 00:16:41.151 { 00:16:41.151 "name": null, 00:16:41.151 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:41.151 "is_configured": false, 00:16:41.151 "data_offset": 0, 00:16:41.151 "data_size": 65536 00:16:41.151 }, 00:16:41.151 { 00:16:41.151 "name": "BaseBdev2", 00:16:41.151 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:41.151 "is_configured": true, 00:16:41.151 "data_offset": 0, 00:16:41.151 "data_size": 65536 00:16:41.151 }, 00:16:41.151 { 00:16:41.151 "name": "BaseBdev3", 00:16:41.151 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:41.151 "is_configured": true, 00:16:41.151 "data_offset": 0, 00:16:41.151 
"data_size": 65536 00:16:41.151 }, 00:16:41.151 { 00:16:41.151 "name": "BaseBdev4", 00:16:41.151 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:41.151 "is_configured": true, 00:16:41.151 "data_offset": 0, 00:16:41.151 "data_size": 65536 00:16:41.151 } 00:16:41.151 ] 00:16:41.151 }' 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.151 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:41.410 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 824c7f03-c8cf-4b22-b9ee-a5a7d8842508 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 [2024-11-20 09:29:06.928692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:41.670 [2024-11-20 09:29:06.928757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:41.670 [2024-11-20 09:29:06.928764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:41.670 [2024-11-20 09:29:06.929039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:41.670 [2024-11-20 09:29:06.936583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:41.670 [2024-11-20 09:29:06.936655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:41.670 [2024-11-20 09:29:06.936942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.670 NewBaseBdev 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 [ 00:16:41.670 { 00:16:41.670 "name": "NewBaseBdev", 00:16:41.670 "aliases": [ 00:16:41.670 "824c7f03-c8cf-4b22-b9ee-a5a7d8842508" 00:16:41.670 ], 00:16:41.670 "product_name": "Malloc disk", 00:16:41.670 "block_size": 512, 00:16:41.670 "num_blocks": 65536, 00:16:41.670 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:41.670 "assigned_rate_limits": { 00:16:41.670 "rw_ios_per_sec": 0, 00:16:41.670 "rw_mbytes_per_sec": 0, 00:16:41.670 "r_mbytes_per_sec": 0, 00:16:41.670 "w_mbytes_per_sec": 0 00:16:41.670 }, 00:16:41.670 "claimed": true, 00:16:41.670 "claim_type": "exclusive_write", 00:16:41.670 "zoned": false, 00:16:41.670 "supported_io_types": { 00:16:41.670 "read": true, 00:16:41.670 "write": true, 00:16:41.670 "unmap": true, 00:16:41.670 "flush": true, 00:16:41.670 "reset": true, 00:16:41.670 "nvme_admin": false, 00:16:41.670 "nvme_io": false, 00:16:41.670 "nvme_io_md": false, 00:16:41.670 "write_zeroes": true, 00:16:41.670 "zcopy": true, 00:16:41.670 "get_zone_info": false, 00:16:41.670 "zone_management": false, 00:16:41.670 "zone_append": false, 00:16:41.670 "compare": false, 00:16:41.670 "compare_and_write": false, 00:16:41.670 "abort": true, 00:16:41.670 "seek_hole": false, 00:16:41.670 "seek_data": false, 00:16:41.670 "copy": true, 00:16:41.670 
"nvme_iov_md": false 00:16:41.670 }, 00:16:41.670 "memory_domains": [ 00:16:41.670 { 00:16:41.670 "dma_device_id": "system", 00:16:41.670 "dma_device_type": 1 00:16:41.670 }, 00:16:41.670 { 00:16:41.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.670 "dma_device_type": 2 00:16:41.670 } 00:16:41.670 ], 00:16:41.670 "driver_specific": {} 00:16:41.670 } 00:16:41.670 ] 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.670 09:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.670 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.670 "name": "Existed_Raid", 00:16:41.670 "uuid": "223f4c4d-9cf3-47d2-81bf-9dd1d8471c4f", 00:16:41.670 "strip_size_kb": 64, 00:16:41.670 "state": "online", 00:16:41.670 "raid_level": "raid5f", 00:16:41.670 "superblock": false, 00:16:41.670 "num_base_bdevs": 4, 00:16:41.670 "num_base_bdevs_discovered": 4, 00:16:41.670 "num_base_bdevs_operational": 4, 00:16:41.670 "base_bdevs_list": [ 00:16:41.670 { 00:16:41.670 "name": "NewBaseBdev", 00:16:41.670 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:41.670 "is_configured": true, 00:16:41.670 "data_offset": 0, 00:16:41.670 "data_size": 65536 00:16:41.670 }, 00:16:41.670 { 00:16:41.670 "name": "BaseBdev2", 00:16:41.670 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:41.670 "is_configured": true, 00:16:41.670 "data_offset": 0, 00:16:41.670 "data_size": 65536 00:16:41.670 }, 00:16:41.670 { 00:16:41.670 "name": "BaseBdev3", 00:16:41.670 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:41.670 "is_configured": true, 00:16:41.670 "data_offset": 0, 00:16:41.670 "data_size": 65536 00:16:41.670 }, 00:16:41.670 { 00:16:41.670 "name": "BaseBdev4", 00:16:41.670 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:41.670 "is_configured": true, 00:16:41.670 "data_offset": 0, 00:16:41.670 "data_size": 65536 00:16:41.670 } 00:16:41.670 ] 00:16:41.670 }' 00:16:41.670 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.670 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.239 
09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.239 [2024-11-20 09:29:07.405414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.239 "name": "Existed_Raid", 00:16:42.239 "aliases": [ 00:16:42.239 "223f4c4d-9cf3-47d2-81bf-9dd1d8471c4f" 00:16:42.239 ], 00:16:42.239 "product_name": "Raid Volume", 00:16:42.239 "block_size": 512, 00:16:42.239 "num_blocks": 196608, 00:16:42.239 "uuid": "223f4c4d-9cf3-47d2-81bf-9dd1d8471c4f", 00:16:42.239 "assigned_rate_limits": { 00:16:42.239 "rw_ios_per_sec": 0, 00:16:42.239 "rw_mbytes_per_sec": 0, 00:16:42.239 "r_mbytes_per_sec": 0, 00:16:42.239 "w_mbytes_per_sec": 0 00:16:42.239 }, 00:16:42.239 "claimed": false, 
00:16:42.239 "zoned": false, 00:16:42.239 "supported_io_types": { 00:16:42.239 "read": true, 00:16:42.239 "write": true, 00:16:42.239 "unmap": false, 00:16:42.239 "flush": false, 00:16:42.239 "reset": true, 00:16:42.239 "nvme_admin": false, 00:16:42.239 "nvme_io": false, 00:16:42.239 "nvme_io_md": false, 00:16:42.239 "write_zeroes": true, 00:16:42.239 "zcopy": false, 00:16:42.239 "get_zone_info": false, 00:16:42.239 "zone_management": false, 00:16:42.239 "zone_append": false, 00:16:42.239 "compare": false, 00:16:42.239 "compare_and_write": false, 00:16:42.239 "abort": false, 00:16:42.239 "seek_hole": false, 00:16:42.239 "seek_data": false, 00:16:42.239 "copy": false, 00:16:42.239 "nvme_iov_md": false 00:16:42.239 }, 00:16:42.239 "driver_specific": { 00:16:42.239 "raid": { 00:16:42.239 "uuid": "223f4c4d-9cf3-47d2-81bf-9dd1d8471c4f", 00:16:42.239 "strip_size_kb": 64, 00:16:42.239 "state": "online", 00:16:42.239 "raid_level": "raid5f", 00:16:42.239 "superblock": false, 00:16:42.239 "num_base_bdevs": 4, 00:16:42.239 "num_base_bdevs_discovered": 4, 00:16:42.239 "num_base_bdevs_operational": 4, 00:16:42.239 "base_bdevs_list": [ 00:16:42.239 { 00:16:42.239 "name": "NewBaseBdev", 00:16:42.239 "uuid": "824c7f03-c8cf-4b22-b9ee-a5a7d8842508", 00:16:42.239 "is_configured": true, 00:16:42.239 "data_offset": 0, 00:16:42.239 "data_size": 65536 00:16:42.239 }, 00:16:42.239 { 00:16:42.239 "name": "BaseBdev2", 00:16:42.239 "uuid": "bfe5900b-0d07-4107-9d8f-75a38518b1d0", 00:16:42.239 "is_configured": true, 00:16:42.239 "data_offset": 0, 00:16:42.239 "data_size": 65536 00:16:42.239 }, 00:16:42.239 { 00:16:42.239 "name": "BaseBdev3", 00:16:42.239 "uuid": "ac4fe083-13cc-4580-8a55-0a8f5acfd108", 00:16:42.239 "is_configured": true, 00:16:42.239 "data_offset": 0, 00:16:42.239 "data_size": 65536 00:16:42.239 }, 00:16:42.239 { 00:16:42.239 "name": "BaseBdev4", 00:16:42.239 "uuid": "eebb31b9-510c-4ca4-a36a-e47e5d47b524", 00:16:42.239 "is_configured": true, 00:16:42.239 "data_offset": 0, 
00:16:42.239 "data_size": 65536 00:16:42.239 } 00:16:42.239 ] 00:16:42.239 } 00:16:42.239 } 00:16:42.239 }' 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:42.239 BaseBdev2 00:16:42.239 BaseBdev3 00:16:42.239 BaseBdev4' 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.239 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:42.240 09:29:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.240 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.499 [2024-11-20 09:29:07.720637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.499 [2024-11-20 09:29:07.720671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.499 [2024-11-20 09:29:07.720764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.499 [2024-11-20 09:29:07.721083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.499 [2024-11-20 09:29:07.721096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83188 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83188 ']' 00:16:42.499 09:29:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83188 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83188 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.499 killing process with pid 83188 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83188' 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83188 00:16:42.499 [2024-11-20 09:29:07.771500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.499 09:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83188 00:16:42.757 [2024-11-20 09:29:08.196388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:44.167 00:16:44.167 real 0m11.799s 00:16:44.167 user 0m18.637s 00:16:44.167 sys 0m2.066s 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 ************************************ 00:16:44.167 END TEST raid5f_state_function_test 00:16:44.167 ************************************ 00:16:44.167 09:29:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:44.167 
09:29:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:44.167 09:29:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.167 09:29:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 ************************************ 00:16:44.167 START TEST raid5f_state_function_test_sb 00:16:44.167 ************************************ 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:44.167 09:29:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83860 00:16:44.167 09:29:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83860' 00:16:44.167 Process raid pid: 83860 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83860 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83860 ']' 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.167 09:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 [2024-11-20 09:29:09.522981] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:16:44.167 [2024-11-20 09:29:09.523185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.426 [2024-11-20 09:29:09.699134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.426 [2024-11-20 09:29:09.849895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.685 [2024-11-20 09:29:10.079785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.685 [2024-11-20 09:29:10.079921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.253 [2024-11-20 09:29:10.434418] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.253 [2024-11-20 09:29:10.434578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.253 [2024-11-20 09:29:10.434617] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.253 [2024-11-20 09:29:10.434658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.253 [2024-11-20 09:29:10.434689] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:45.253 [2024-11-20 09:29:10.434715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.253 [2024-11-20 09:29:10.434756] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:45.253 [2024-11-20 09:29:10.434782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.253 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.253 "name": "Existed_Raid", 00:16:45.253 "uuid": "ace8d0ac-91ed-4ad1-a64f-b86c30e81400", 00:16:45.253 "strip_size_kb": 64, 00:16:45.253 "state": "configuring", 00:16:45.253 "raid_level": "raid5f", 00:16:45.253 "superblock": true, 00:16:45.253 "num_base_bdevs": 4, 00:16:45.253 "num_base_bdevs_discovered": 0, 00:16:45.253 "num_base_bdevs_operational": 4, 00:16:45.253 "base_bdevs_list": [ 00:16:45.253 { 00:16:45.253 "name": "BaseBdev1", 00:16:45.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.253 "is_configured": false, 00:16:45.253 "data_offset": 0, 00:16:45.253 "data_size": 0 00:16:45.253 }, 00:16:45.253 { 00:16:45.253 "name": "BaseBdev2", 00:16:45.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.253 "is_configured": false, 00:16:45.253 "data_offset": 0, 00:16:45.253 "data_size": 0 00:16:45.253 }, 00:16:45.253 { 00:16:45.253 "name": "BaseBdev3", 00:16:45.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.253 "is_configured": false, 00:16:45.253 "data_offset": 0, 00:16:45.253 "data_size": 0 00:16:45.253 }, 00:16:45.253 { 00:16:45.253 "name": "BaseBdev4", 00:16:45.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.253 "is_configured": false, 00:16:45.254 "data_offset": 0, 00:16:45.254 "data_size": 0 00:16:45.254 } 00:16:45.254 ] 00:16:45.254 }' 00:16:45.254 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.254 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.513 [2024-11-20 09:29:10.873638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.513 [2024-11-20 09:29:10.873695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.513 [2024-11-20 09:29:10.885620] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.513 [2024-11-20 09:29:10.885678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.513 [2024-11-20 09:29:10.885689] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.513 [2024-11-20 09:29:10.885699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.513 [2024-11-20 09:29:10.885707] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.513 [2024-11-20 09:29:10.885717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.513 [2024-11-20 09:29:10.885724] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:45.513 [2024-11-20 09:29:10.885734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.513 [2024-11-20 09:29:10.938084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.513 BaseBdev1 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.513 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.772 [ 00:16:45.772 { 00:16:45.772 "name": "BaseBdev1", 00:16:45.772 "aliases": [ 00:16:45.772 "a14b527a-48ed-40e0-ad09-073a438f562c" 00:16:45.772 ], 00:16:45.772 "product_name": "Malloc disk", 00:16:45.772 "block_size": 512, 00:16:45.772 "num_blocks": 65536, 00:16:45.772 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:45.772 "assigned_rate_limits": { 00:16:45.772 "rw_ios_per_sec": 0, 00:16:45.772 "rw_mbytes_per_sec": 0, 00:16:45.772 "r_mbytes_per_sec": 0, 00:16:45.772 "w_mbytes_per_sec": 0 00:16:45.772 }, 00:16:45.772 "claimed": true, 00:16:45.772 "claim_type": "exclusive_write", 00:16:45.772 "zoned": false, 00:16:45.772 "supported_io_types": { 00:16:45.772 "read": true, 00:16:45.772 "write": true, 00:16:45.772 "unmap": true, 00:16:45.772 "flush": true, 00:16:45.772 "reset": true, 00:16:45.772 "nvme_admin": false, 00:16:45.772 "nvme_io": false, 00:16:45.773 "nvme_io_md": false, 00:16:45.773 "write_zeroes": true, 00:16:45.773 "zcopy": true, 00:16:45.773 "get_zone_info": false, 00:16:45.773 "zone_management": false, 00:16:45.773 "zone_append": false, 00:16:45.773 "compare": false, 00:16:45.773 "compare_and_write": false, 00:16:45.773 "abort": true, 00:16:45.773 "seek_hole": false, 00:16:45.773 "seek_data": false, 00:16:45.773 "copy": true, 00:16:45.773 "nvme_iov_md": false 00:16:45.773 }, 00:16:45.773 "memory_domains": [ 00:16:45.773 { 00:16:45.773 "dma_device_id": "system", 00:16:45.773 "dma_device_type": 1 00:16:45.773 }, 00:16:45.773 { 00:16:45.773 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:45.773 "dma_device_type": 2 00:16:45.773 } 00:16:45.773 ], 00:16:45.773 "driver_specific": {} 00:16:45.773 } 00:16:45.773 ] 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.773 09:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.773 09:29:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.773 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.773 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.773 "name": "Existed_Raid", 00:16:45.773 "uuid": "51c6a143-038c-40ff-97c4-408460dd6631", 00:16:45.773 "strip_size_kb": 64, 00:16:45.773 "state": "configuring", 00:16:45.773 "raid_level": "raid5f", 00:16:45.773 "superblock": true, 00:16:45.773 "num_base_bdevs": 4, 00:16:45.773 "num_base_bdevs_discovered": 1, 00:16:45.773 "num_base_bdevs_operational": 4, 00:16:45.773 "base_bdevs_list": [ 00:16:45.773 { 00:16:45.773 "name": "BaseBdev1", 00:16:45.773 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:45.773 "is_configured": true, 00:16:45.773 "data_offset": 2048, 00:16:45.773 "data_size": 63488 00:16:45.773 }, 00:16:45.773 { 00:16:45.773 "name": "BaseBdev2", 00:16:45.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.773 "is_configured": false, 00:16:45.773 "data_offset": 0, 00:16:45.773 "data_size": 0 00:16:45.773 }, 00:16:45.773 { 00:16:45.773 "name": "BaseBdev3", 00:16:45.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.773 "is_configured": false, 00:16:45.773 "data_offset": 0, 00:16:45.773 "data_size": 0 00:16:45.773 }, 00:16:45.773 { 00:16:45.773 "name": "BaseBdev4", 00:16:45.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.773 "is_configured": false, 00:16:45.773 "data_offset": 0, 00:16:45.773 "data_size": 0 00:16:45.773 } 00:16:45.773 ] 00:16:45.773 }' 00:16:45.773 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.773 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.032 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.032 09:29:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.032 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.032 [2024-11-20 09:29:11.469275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.032 [2024-11-20 09:29:11.469356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:46.032 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.033 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.033 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.033 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 [2024-11-20 09:29:11.481310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.033 [2024-11-20 09:29:11.483214] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.033 [2024-11-20 09:29:11.483261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.033 [2024-11-20 09:29:11.483271] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.033 [2024-11-20 09:29:11.483298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.033 [2024-11-20 09:29:11.483306] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:46.033 [2024-11-20 09:29:11.483315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.292 09:29:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.292 "name": "Existed_Raid", 00:16:46.292 "uuid": "f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:46.292 "strip_size_kb": 64, 00:16:46.292 "state": "configuring", 00:16:46.292 "raid_level": "raid5f", 00:16:46.292 "superblock": true, 00:16:46.292 "num_base_bdevs": 4, 00:16:46.292 "num_base_bdevs_discovered": 1, 00:16:46.292 "num_base_bdevs_operational": 4, 00:16:46.292 "base_bdevs_list": [ 00:16:46.292 { 00:16:46.292 "name": "BaseBdev1", 00:16:46.292 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:46.292 "is_configured": true, 00:16:46.292 "data_offset": 2048, 00:16:46.292 "data_size": 63488 00:16:46.292 }, 00:16:46.292 { 00:16:46.292 "name": "BaseBdev2", 00:16:46.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.292 "is_configured": false, 00:16:46.292 "data_offset": 0, 00:16:46.292 "data_size": 0 00:16:46.292 }, 00:16:46.292 { 00:16:46.292 "name": "BaseBdev3", 00:16:46.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.292 "is_configured": false, 00:16:46.292 "data_offset": 0, 00:16:46.292 "data_size": 0 00:16:46.292 }, 00:16:46.292 { 00:16:46.292 "name": "BaseBdev4", 00:16:46.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.292 "is_configured": false, 00:16:46.292 "data_offset": 0, 00:16:46.292 "data_size": 0 00:16:46.292 } 00:16:46.292 ] 00:16:46.292 }' 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.292 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.551 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.552 [2024-11-20 09:29:11.940142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.552 BaseBdev2 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.552 [ 00:16:46.552 { 00:16:46.552 "name": "BaseBdev2", 00:16:46.552 "aliases": [ 00:16:46.552 
"ae686d02-1341-4cea-8017-f905b79caccd" 00:16:46.552 ], 00:16:46.552 "product_name": "Malloc disk", 00:16:46.552 "block_size": 512, 00:16:46.552 "num_blocks": 65536, 00:16:46.552 "uuid": "ae686d02-1341-4cea-8017-f905b79caccd", 00:16:46.552 "assigned_rate_limits": { 00:16:46.552 "rw_ios_per_sec": 0, 00:16:46.552 "rw_mbytes_per_sec": 0, 00:16:46.552 "r_mbytes_per_sec": 0, 00:16:46.552 "w_mbytes_per_sec": 0 00:16:46.552 }, 00:16:46.552 "claimed": true, 00:16:46.552 "claim_type": "exclusive_write", 00:16:46.552 "zoned": false, 00:16:46.552 "supported_io_types": { 00:16:46.552 "read": true, 00:16:46.552 "write": true, 00:16:46.552 "unmap": true, 00:16:46.552 "flush": true, 00:16:46.552 "reset": true, 00:16:46.552 "nvme_admin": false, 00:16:46.552 "nvme_io": false, 00:16:46.552 "nvme_io_md": false, 00:16:46.552 "write_zeroes": true, 00:16:46.552 "zcopy": true, 00:16:46.552 "get_zone_info": false, 00:16:46.552 "zone_management": false, 00:16:46.552 "zone_append": false, 00:16:46.552 "compare": false, 00:16:46.552 "compare_and_write": false, 00:16:46.552 "abort": true, 00:16:46.552 "seek_hole": false, 00:16:46.552 "seek_data": false, 00:16:46.552 "copy": true, 00:16:46.552 "nvme_iov_md": false 00:16:46.552 }, 00:16:46.552 "memory_domains": [ 00:16:46.552 { 00:16:46.552 "dma_device_id": "system", 00:16:46.552 "dma_device_type": 1 00:16:46.552 }, 00:16:46.552 { 00:16:46.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.552 "dma_device_type": 2 00:16:46.552 } 00:16:46.552 ], 00:16:46.552 "driver_specific": {} 00:16:46.552 } 00:16:46.552 ] 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.552 09:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.811 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.811 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.811 "name": "Existed_Raid", 00:16:46.811 "uuid": 
"f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:46.811 "strip_size_kb": 64, 00:16:46.811 "state": "configuring", 00:16:46.811 "raid_level": "raid5f", 00:16:46.811 "superblock": true, 00:16:46.811 "num_base_bdevs": 4, 00:16:46.811 "num_base_bdevs_discovered": 2, 00:16:46.811 "num_base_bdevs_operational": 4, 00:16:46.811 "base_bdevs_list": [ 00:16:46.811 { 00:16:46.811 "name": "BaseBdev1", 00:16:46.811 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:46.811 "is_configured": true, 00:16:46.811 "data_offset": 2048, 00:16:46.811 "data_size": 63488 00:16:46.811 }, 00:16:46.811 { 00:16:46.811 "name": "BaseBdev2", 00:16:46.811 "uuid": "ae686d02-1341-4cea-8017-f905b79caccd", 00:16:46.811 "is_configured": true, 00:16:46.811 "data_offset": 2048, 00:16:46.811 "data_size": 63488 00:16:46.811 }, 00:16:46.811 { 00:16:46.811 "name": "BaseBdev3", 00:16:46.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.811 "is_configured": false, 00:16:46.811 "data_offset": 0, 00:16:46.811 "data_size": 0 00:16:46.811 }, 00:16:46.811 { 00:16:46.811 "name": "BaseBdev4", 00:16:46.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.811 "is_configured": false, 00:16:46.811 "data_offset": 0, 00:16:46.811 "data_size": 0 00:16:46.811 } 00:16:46.811 ] 00:16:46.811 }' 00:16:46.811 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.811 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.071 [2024-11-20 09:29:12.515096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.071 BaseBdev3 
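After each `bdev_malloc_create`, the trace runs `waitforbdev`, which polls `bdev_get_bdevs -b <name> -t 2000` until the new bdev is visible. A minimal sketch of that polling pattern (the `get_bdevs` callable below is a stub standing in for the RPC call; the real helper in `autotest_common.sh` shells out to rpc.py):

```python
import time

def waitforbdev(get_bdevs, bdev_name, timeout_s=2.0, poll_interval=0.05):
    """Poll until the named bdev appears or the timeout expires, mirroring
    the waitforbdev shell helper seen in the trace (sketch, not the real code)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(b["name"] == bdev_name for b in get_bdevs()):
            return True
        time.sleep(poll_interval)
    return False

# Stub RPC result: the bdevs created so far at this point in the log.
created = [{"name": "BaseBdev1"}, {"name": "BaseBdev2"}, {"name": "BaseBdev3"}]

print(waitforbdev(lambda: created, "BaseBdev3"))        # found immediately
print(waitforbdev(lambda: created, "NoSuchBdev", 0.2))  # times out
```

The 2000 in the trace is the same timeout expressed in milliseconds, set by the `bdev_timeout=2000` default when no explicit timeout is passed.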
00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.071 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.331 [ 00:16:47.331 { 00:16:47.331 "name": "BaseBdev3", 00:16:47.331 "aliases": [ 00:16:47.331 "e6579f92-9966-43b4-96c5-5196fe5ea4b9" 00:16:47.331 ], 00:16:47.331 "product_name": "Malloc disk", 00:16:47.331 "block_size": 512, 00:16:47.331 "num_blocks": 65536, 00:16:47.331 "uuid": "e6579f92-9966-43b4-96c5-5196fe5ea4b9", 00:16:47.331 
"assigned_rate_limits": { 00:16:47.331 "rw_ios_per_sec": 0, 00:16:47.331 "rw_mbytes_per_sec": 0, 00:16:47.331 "r_mbytes_per_sec": 0, 00:16:47.331 "w_mbytes_per_sec": 0 00:16:47.331 }, 00:16:47.331 "claimed": true, 00:16:47.331 "claim_type": "exclusive_write", 00:16:47.331 "zoned": false, 00:16:47.331 "supported_io_types": { 00:16:47.331 "read": true, 00:16:47.331 "write": true, 00:16:47.331 "unmap": true, 00:16:47.331 "flush": true, 00:16:47.331 "reset": true, 00:16:47.331 "nvme_admin": false, 00:16:47.331 "nvme_io": false, 00:16:47.331 "nvme_io_md": false, 00:16:47.331 "write_zeroes": true, 00:16:47.331 "zcopy": true, 00:16:47.331 "get_zone_info": false, 00:16:47.331 "zone_management": false, 00:16:47.331 "zone_append": false, 00:16:47.331 "compare": false, 00:16:47.331 "compare_and_write": false, 00:16:47.331 "abort": true, 00:16:47.331 "seek_hole": false, 00:16:47.331 "seek_data": false, 00:16:47.331 "copy": true, 00:16:47.331 "nvme_iov_md": false 00:16:47.331 }, 00:16:47.331 "memory_domains": [ 00:16:47.331 { 00:16:47.331 "dma_device_id": "system", 00:16:47.331 "dma_device_type": 1 00:16:47.331 }, 00:16:47.331 { 00:16:47.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.331 "dma_device_type": 2 00:16:47.331 } 00:16:47.331 ], 00:16:47.331 "driver_specific": {} 00:16:47.331 } 00:16:47.331 ] 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.331 "name": "Existed_Raid", 00:16:47.331 "uuid": "f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:47.331 "strip_size_kb": 64, 00:16:47.331 "state": "configuring", 00:16:47.331 "raid_level": "raid5f", 00:16:47.331 "superblock": true, 00:16:47.331 "num_base_bdevs": 4, 00:16:47.331 "num_base_bdevs_discovered": 3, 
00:16:47.331 "num_base_bdevs_operational": 4, 00:16:47.331 "base_bdevs_list": [ 00:16:47.331 { 00:16:47.331 "name": "BaseBdev1", 00:16:47.331 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:47.331 "is_configured": true, 00:16:47.331 "data_offset": 2048, 00:16:47.331 "data_size": 63488 00:16:47.331 }, 00:16:47.331 { 00:16:47.331 "name": "BaseBdev2", 00:16:47.331 "uuid": "ae686d02-1341-4cea-8017-f905b79caccd", 00:16:47.331 "is_configured": true, 00:16:47.331 "data_offset": 2048, 00:16:47.331 "data_size": 63488 00:16:47.331 }, 00:16:47.331 { 00:16:47.331 "name": "BaseBdev3", 00:16:47.331 "uuid": "e6579f92-9966-43b4-96c5-5196fe5ea4b9", 00:16:47.331 "is_configured": true, 00:16:47.331 "data_offset": 2048, 00:16:47.331 "data_size": 63488 00:16:47.331 }, 00:16:47.331 { 00:16:47.331 "name": "BaseBdev4", 00:16:47.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.331 "is_configured": false, 00:16:47.331 "data_offset": 0, 00:16:47.331 "data_size": 0 00:16:47.331 } 00:16:47.331 ] 00:16:47.331 }' 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.331 09:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.590 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:47.590 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.590 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.849 [2024-11-20 09:29:13.107525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.849 [2024-11-20 09:29:13.107920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.849 [2024-11-20 09:29:13.107960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:47.849 [2024-11-20 
09:29:13.108297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:47.849 BaseBdev4 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.849 [2024-11-20 09:29:13.117479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.849 [2024-11-20 09:29:13.117577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.849 [2024-11-20 09:29:13.118011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:47.849 09:29:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.849 [ 00:16:47.849 { 00:16:47.849 "name": "BaseBdev4", 00:16:47.849 "aliases": [ 00:16:47.849 "ff91325a-9246-478b-aef0-ad27672dcc40" 00:16:47.849 ], 00:16:47.849 "product_name": "Malloc disk", 00:16:47.849 "block_size": 512, 00:16:47.849 "num_blocks": 65536, 00:16:47.849 "uuid": "ff91325a-9246-478b-aef0-ad27672dcc40", 00:16:47.849 "assigned_rate_limits": { 00:16:47.849 "rw_ios_per_sec": 0, 00:16:47.849 "rw_mbytes_per_sec": 0, 00:16:47.849 "r_mbytes_per_sec": 0, 00:16:47.849 "w_mbytes_per_sec": 0 00:16:47.849 }, 00:16:47.849 "claimed": true, 00:16:47.849 "claim_type": "exclusive_write", 00:16:47.849 "zoned": false, 00:16:47.849 "supported_io_types": { 00:16:47.849 "read": true, 00:16:47.849 "write": true, 00:16:47.849 "unmap": true, 00:16:47.849 "flush": true, 00:16:47.849 "reset": true, 00:16:47.849 "nvme_admin": false, 00:16:47.849 "nvme_io": false, 00:16:47.849 "nvme_io_md": false, 00:16:47.849 "write_zeroes": true, 00:16:47.849 "zcopy": true, 00:16:47.849 "get_zone_info": false, 00:16:47.849 "zone_management": false, 00:16:47.849 "zone_append": false, 00:16:47.849 "compare": false, 00:16:47.849 "compare_and_write": false, 00:16:47.849 "abort": true, 00:16:47.849 "seek_hole": false, 00:16:47.849 "seek_data": false, 00:16:47.849 "copy": true, 00:16:47.849 "nvme_iov_md": false 00:16:47.849 }, 00:16:47.849 "memory_domains": [ 00:16:47.849 { 00:16:47.849 "dma_device_id": "system", 00:16:47.849 "dma_device_type": 1 00:16:47.849 }, 00:16:47.849 { 00:16:47.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.849 "dma_device_type": 2 00:16:47.849 } 00:16:47.849 ], 00:16:47.849 "driver_specific": {} 00:16:47.849 } 00:16:47.849 ] 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.849 09:29:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.849 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.850 "name": "Existed_Raid", 00:16:47.850 "uuid": "f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:47.850 "strip_size_kb": 64, 00:16:47.850 "state": "online", 00:16:47.850 "raid_level": "raid5f", 00:16:47.850 "superblock": true, 00:16:47.850 "num_base_bdevs": 4, 00:16:47.850 "num_base_bdevs_discovered": 4, 00:16:47.850 "num_base_bdevs_operational": 4, 00:16:47.850 "base_bdevs_list": [ 00:16:47.850 { 00:16:47.850 "name": "BaseBdev1", 00:16:47.850 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:47.850 "is_configured": true, 00:16:47.850 "data_offset": 2048, 00:16:47.850 "data_size": 63488 00:16:47.850 }, 00:16:47.850 { 00:16:47.850 "name": "BaseBdev2", 00:16:47.850 "uuid": "ae686d02-1341-4cea-8017-f905b79caccd", 00:16:47.850 "is_configured": true, 00:16:47.850 "data_offset": 2048, 00:16:47.850 "data_size": 63488 00:16:47.850 }, 00:16:47.850 { 00:16:47.850 "name": "BaseBdev3", 00:16:47.850 "uuid": "e6579f92-9966-43b4-96c5-5196fe5ea4b9", 00:16:47.850 "is_configured": true, 00:16:47.850 "data_offset": 2048, 00:16:47.850 "data_size": 63488 00:16:47.850 }, 00:16:47.850 { 00:16:47.850 "name": "BaseBdev4", 00:16:47.850 "uuid": "ff91325a-9246-478b-aef0-ad27672dcc40", 00:16:47.850 "is_configured": true, 00:16:47.850 "data_offset": 2048, 00:16:47.850 "data_size": 63488 00:16:47.850 } 00:16:47.850 ] 00:16:47.850 }' 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.850 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.417 [2024-11-20 09:29:13.633000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.417 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.417 "name": "Existed_Raid", 00:16:48.417 "aliases": [ 00:16:48.417 "f46bc516-3fb3-4732-97b4-eea2aa9a7080" 00:16:48.417 ], 00:16:48.417 "product_name": "Raid Volume", 00:16:48.417 "block_size": 512, 00:16:48.418 "num_blocks": 190464, 00:16:48.418 "uuid": "f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:48.418 "assigned_rate_limits": { 00:16:48.418 "rw_ios_per_sec": 0, 00:16:48.418 "rw_mbytes_per_sec": 0, 00:16:48.418 "r_mbytes_per_sec": 0, 00:16:48.418 "w_mbytes_per_sec": 0 00:16:48.418 }, 00:16:48.418 "claimed": false, 00:16:48.418 "zoned": false, 00:16:48.418 "supported_io_types": { 00:16:48.418 "read": true, 00:16:48.418 "write": true, 00:16:48.418 "unmap": false, 00:16:48.418 "flush": false, 
00:16:48.418 "reset": true, 00:16:48.418 "nvme_admin": false, 00:16:48.418 "nvme_io": false, 00:16:48.418 "nvme_io_md": false, 00:16:48.418 "write_zeroes": true, 00:16:48.418 "zcopy": false, 00:16:48.418 "get_zone_info": false, 00:16:48.418 "zone_management": false, 00:16:48.418 "zone_append": false, 00:16:48.418 "compare": false, 00:16:48.418 "compare_and_write": false, 00:16:48.418 "abort": false, 00:16:48.418 "seek_hole": false, 00:16:48.418 "seek_data": false, 00:16:48.418 "copy": false, 00:16:48.418 "nvme_iov_md": false 00:16:48.418 }, 00:16:48.418 "driver_specific": { 00:16:48.418 "raid": { 00:16:48.418 "uuid": "f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:48.418 "strip_size_kb": 64, 00:16:48.418 "state": "online", 00:16:48.418 "raid_level": "raid5f", 00:16:48.418 "superblock": true, 00:16:48.418 "num_base_bdevs": 4, 00:16:48.418 "num_base_bdevs_discovered": 4, 00:16:48.418 "num_base_bdevs_operational": 4, 00:16:48.418 "base_bdevs_list": [ 00:16:48.418 { 00:16:48.418 "name": "BaseBdev1", 00:16:48.418 "uuid": "a14b527a-48ed-40e0-ad09-073a438f562c", 00:16:48.418 "is_configured": true, 00:16:48.418 "data_offset": 2048, 00:16:48.418 "data_size": 63488 00:16:48.418 }, 00:16:48.418 { 00:16:48.418 "name": "BaseBdev2", 00:16:48.418 "uuid": "ae686d02-1341-4cea-8017-f905b79caccd", 00:16:48.418 "is_configured": true, 00:16:48.418 "data_offset": 2048, 00:16:48.418 "data_size": 63488 00:16:48.418 }, 00:16:48.418 { 00:16:48.418 "name": "BaseBdev3", 00:16:48.418 "uuid": "e6579f92-9966-43b4-96c5-5196fe5ea4b9", 00:16:48.418 "is_configured": true, 00:16:48.418 "data_offset": 2048, 00:16:48.418 "data_size": 63488 00:16:48.418 }, 00:16:48.418 { 00:16:48.418 "name": "BaseBdev4", 00:16:48.418 "uuid": "ff91325a-9246-478b-aef0-ad27672dcc40", 00:16:48.418 "is_configured": true, 00:16:48.418 "data_offset": 2048, 00:16:48.418 "data_size": 63488 00:16:48.418 } 00:16:48.418 ] 00:16:48.418 } 00:16:48.418 } 00:16:48.418 }' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:48.418 BaseBdev2 00:16:48.418 BaseBdev3 00:16:48.418 BaseBdev4' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.418 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.678 09:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.678 [2024-11-20 09:29:13.928619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.678 "name": "Existed_Raid", 00:16:48.678 "uuid": "f46bc516-3fb3-4732-97b4-eea2aa9a7080", 00:16:48.678 "strip_size_kb": 64, 00:16:48.678 "state": "online", 00:16:48.678 "raid_level": "raid5f", 00:16:48.678 "superblock": true, 00:16:48.678 "num_base_bdevs": 4, 00:16:48.678 "num_base_bdevs_discovered": 3, 00:16:48.678 "num_base_bdevs_operational": 3, 00:16:48.678 "base_bdevs_list": [ 00:16:48.678 { 00:16:48.678 "name": null, 00:16:48.678 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:48.678 "is_configured": false, 00:16:48.678 "data_offset": 0, 00:16:48.678 "data_size": 63488 00:16:48.678 }, 00:16:48.678 { 00:16:48.678 "name": "BaseBdev2", 00:16:48.678 "uuid": "ae686d02-1341-4cea-8017-f905b79caccd", 00:16:48.678 "is_configured": true, 00:16:48.678 "data_offset": 2048, 00:16:48.678 "data_size": 63488 00:16:48.678 }, 00:16:48.678 { 00:16:48.678 "name": "BaseBdev3", 00:16:48.678 "uuid": "e6579f92-9966-43b4-96c5-5196fe5ea4b9", 00:16:48.678 "is_configured": true, 00:16:48.678 "data_offset": 2048, 00:16:48.678 "data_size": 63488 00:16:48.678 }, 00:16:48.678 { 00:16:48.678 "name": "BaseBdev4", 00:16:48.678 "uuid": "ff91325a-9246-478b-aef0-ad27672dcc40", 00:16:48.678 "is_configured": true, 00:16:48.678 "data_offset": 2048, 00:16:48.678 "data_size": 63488 00:16:48.678 } 00:16:48.678 ] 00:16:48.678 }' 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.678 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.246 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:49.246 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.246 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.246 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.247 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.247 [2024-11-20 09:29:14.587581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.247 [2024-11-20 09:29:14.587899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.505 [2024-11-20 09:29:14.728352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.505 
09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.505 [2024-11-20 09:29:14.788356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.505 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.764 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:49.764 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:49.764 09:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:49.764 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.764 09:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.764 [2024-11-20 09:29:14.979551] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:49.764 [2024-11-20 09:29:14.979751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.764 BaseBdev2 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.764 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 [ 00:16:50.023 { 00:16:50.023 "name": "BaseBdev2", 00:16:50.023 "aliases": [ 00:16:50.023 "f87e6e91-9e48-4bb0-a98f-529ed8a849d6" 00:16:50.023 ], 00:16:50.023 "product_name": "Malloc disk", 00:16:50.023 "block_size": 512, 00:16:50.023 "num_blocks": 65536, 00:16:50.023 "uuid": 
"f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:50.023 "assigned_rate_limits": { 00:16:50.023 "rw_ios_per_sec": 0, 00:16:50.023 "rw_mbytes_per_sec": 0, 00:16:50.023 "r_mbytes_per_sec": 0, 00:16:50.023 "w_mbytes_per_sec": 0 00:16:50.023 }, 00:16:50.023 "claimed": false, 00:16:50.023 "zoned": false, 00:16:50.023 "supported_io_types": { 00:16:50.023 "read": true, 00:16:50.023 "write": true, 00:16:50.023 "unmap": true, 00:16:50.023 "flush": true, 00:16:50.023 "reset": true, 00:16:50.023 "nvme_admin": false, 00:16:50.023 "nvme_io": false, 00:16:50.023 "nvme_io_md": false, 00:16:50.023 "write_zeroes": true, 00:16:50.023 "zcopy": true, 00:16:50.023 "get_zone_info": false, 00:16:50.023 "zone_management": false, 00:16:50.023 "zone_append": false, 00:16:50.023 "compare": false, 00:16:50.023 "compare_and_write": false, 00:16:50.023 "abort": true, 00:16:50.023 "seek_hole": false, 00:16:50.023 "seek_data": false, 00:16:50.023 "copy": true, 00:16:50.023 "nvme_iov_md": false 00:16:50.023 }, 00:16:50.023 "memory_domains": [ 00:16:50.023 { 00:16:50.023 "dma_device_id": "system", 00:16:50.023 "dma_device_type": 1 00:16:50.023 }, 00:16:50.023 { 00:16:50.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.023 "dma_device_type": 2 00:16:50.023 } 00:16:50.023 ], 00:16:50.023 "driver_specific": {} 00:16:50.023 } 00:16:50.023 ] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 BaseBdev3 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 [ 00:16:50.023 { 00:16:50.023 "name": "BaseBdev3", 00:16:50.023 "aliases": [ 00:16:50.023 "f9797cff-1ec1-4f95-a487-d87a0ecdad55" 00:16:50.023 ], 00:16:50.023 
"product_name": "Malloc disk", 00:16:50.023 "block_size": 512, 00:16:50.023 "num_blocks": 65536, 00:16:50.023 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:50.023 "assigned_rate_limits": { 00:16:50.023 "rw_ios_per_sec": 0, 00:16:50.023 "rw_mbytes_per_sec": 0, 00:16:50.023 "r_mbytes_per_sec": 0, 00:16:50.023 "w_mbytes_per_sec": 0 00:16:50.023 }, 00:16:50.023 "claimed": false, 00:16:50.023 "zoned": false, 00:16:50.023 "supported_io_types": { 00:16:50.023 "read": true, 00:16:50.023 "write": true, 00:16:50.023 "unmap": true, 00:16:50.023 "flush": true, 00:16:50.023 "reset": true, 00:16:50.023 "nvme_admin": false, 00:16:50.023 "nvme_io": false, 00:16:50.023 "nvme_io_md": false, 00:16:50.023 "write_zeroes": true, 00:16:50.023 "zcopy": true, 00:16:50.023 "get_zone_info": false, 00:16:50.023 "zone_management": false, 00:16:50.023 "zone_append": false, 00:16:50.023 "compare": false, 00:16:50.023 "compare_and_write": false, 00:16:50.023 "abort": true, 00:16:50.023 "seek_hole": false, 00:16:50.023 "seek_data": false, 00:16:50.023 "copy": true, 00:16:50.023 "nvme_iov_md": false 00:16:50.023 }, 00:16:50.023 "memory_domains": [ 00:16:50.023 { 00:16:50.023 "dma_device_id": "system", 00:16:50.023 "dma_device_type": 1 00:16:50.023 }, 00:16:50.023 { 00:16:50.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.023 "dma_device_type": 2 00:16:50.023 } 00:16:50.023 ], 00:16:50.023 "driver_specific": {} 00:16:50.023 } 00:16:50.023 ] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 BaseBdev4 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.023 [ 00:16:50.023 { 00:16:50.023 "name": "BaseBdev4", 00:16:50.023 
"aliases": [ 00:16:50.023 "8f65b829-2f68-48e6-94b2-b9f009e2ebf4" 00:16:50.023 ], 00:16:50.023 "product_name": "Malloc disk", 00:16:50.023 "block_size": 512, 00:16:50.023 "num_blocks": 65536, 00:16:50.023 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:50.023 "assigned_rate_limits": { 00:16:50.023 "rw_ios_per_sec": 0, 00:16:50.023 "rw_mbytes_per_sec": 0, 00:16:50.023 "r_mbytes_per_sec": 0, 00:16:50.023 "w_mbytes_per_sec": 0 00:16:50.023 }, 00:16:50.023 "claimed": false, 00:16:50.023 "zoned": false, 00:16:50.023 "supported_io_types": { 00:16:50.023 "read": true, 00:16:50.023 "write": true, 00:16:50.023 "unmap": true, 00:16:50.023 "flush": true, 00:16:50.023 "reset": true, 00:16:50.023 "nvme_admin": false, 00:16:50.023 "nvme_io": false, 00:16:50.023 "nvme_io_md": false, 00:16:50.023 "write_zeroes": true, 00:16:50.023 "zcopy": true, 00:16:50.023 "get_zone_info": false, 00:16:50.023 "zone_management": false, 00:16:50.023 "zone_append": false, 00:16:50.023 "compare": false, 00:16:50.023 "compare_and_write": false, 00:16:50.023 "abort": true, 00:16:50.023 "seek_hole": false, 00:16:50.023 "seek_data": false, 00:16:50.023 "copy": true, 00:16:50.023 "nvme_iov_md": false 00:16:50.023 }, 00:16:50.023 "memory_domains": [ 00:16:50.023 { 00:16:50.023 "dma_device_id": "system", 00:16:50.023 "dma_device_type": 1 00:16:50.023 }, 00:16:50.023 { 00:16:50.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.023 "dma_device_type": 2 00:16:50.023 } 00:16:50.023 ], 00:16:50.023 "driver_specific": {} 00:16:50.023 } 00:16:50.023 ] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:50.023 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:50.024 
09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.024 [2024-11-20 09:29:15.420931] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.024 [2024-11-20 09:29:15.421087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.024 [2024-11-20 09:29:15.421159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.024 [2024-11-20 09:29:15.423560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.024 [2024-11-20 09:29:15.423711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.024 "name": "Existed_Raid", 00:16:50.024 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:50.024 "strip_size_kb": 64, 00:16:50.024 "state": "configuring", 00:16:50.024 "raid_level": "raid5f", 00:16:50.024 "superblock": true, 00:16:50.024 "num_base_bdevs": 4, 00:16:50.024 "num_base_bdevs_discovered": 3, 00:16:50.024 "num_base_bdevs_operational": 4, 00:16:50.024 "base_bdevs_list": [ 00:16:50.024 { 00:16:50.024 "name": "BaseBdev1", 00:16:50.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.024 "is_configured": false, 00:16:50.024 "data_offset": 0, 00:16:50.024 "data_size": 0 00:16:50.024 }, 00:16:50.024 { 00:16:50.024 "name": "BaseBdev2", 00:16:50.024 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:50.024 "is_configured": true, 00:16:50.024 "data_offset": 2048, 00:16:50.024 "data_size": 63488 00:16:50.024 }, 00:16:50.024 { 00:16:50.024 "name": "BaseBdev3", 
00:16:50.024 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:50.024 "is_configured": true, 00:16:50.024 "data_offset": 2048, 00:16:50.024 "data_size": 63488 00:16:50.024 }, 00:16:50.024 { 00:16:50.024 "name": "BaseBdev4", 00:16:50.024 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:50.024 "is_configured": true, 00:16:50.024 "data_offset": 2048, 00:16:50.024 "data_size": 63488 00:16:50.024 } 00:16:50.024 ] 00:16:50.024 }' 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.024 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.589 [2024-11-20 09:29:15.832313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.589 
09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.589 "name": "Existed_Raid", 00:16:50.589 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:50.589 "strip_size_kb": 64, 00:16:50.589 "state": "configuring", 00:16:50.589 "raid_level": "raid5f", 00:16:50.589 "superblock": true, 00:16:50.589 "num_base_bdevs": 4, 00:16:50.589 "num_base_bdevs_discovered": 2, 00:16:50.589 "num_base_bdevs_operational": 4, 00:16:50.589 "base_bdevs_list": [ 00:16:50.589 { 00:16:50.589 "name": "BaseBdev1", 00:16:50.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.589 "is_configured": false, 00:16:50.589 "data_offset": 0, 00:16:50.589 "data_size": 0 00:16:50.589 }, 00:16:50.589 { 00:16:50.589 "name": null, 00:16:50.589 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:50.589 "is_configured": false, 00:16:50.589 "data_offset": 0, 00:16:50.589 "data_size": 63488 00:16:50.589 }, 00:16:50.589 { 
00:16:50.589 "name": "BaseBdev3", 00:16:50.589 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:50.589 "is_configured": true, 00:16:50.589 "data_offset": 2048, 00:16:50.589 "data_size": 63488 00:16:50.589 }, 00:16:50.589 { 00:16:50.589 "name": "BaseBdev4", 00:16:50.589 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:50.589 "is_configured": true, 00:16:50.589 "data_offset": 2048, 00:16:50.589 "data_size": 63488 00:16:50.589 } 00:16:50.589 ] 00:16:50.589 }' 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.589 09:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.859 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.859 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.859 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.859 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.859 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.117 [2024-11-20 09:29:16.382632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.117 BaseBdev1 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.117 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.118 [ 00:16:51.118 { 00:16:51.118 "name": "BaseBdev1", 00:16:51.118 "aliases": [ 00:16:51.118 "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1" 00:16:51.118 ], 00:16:51.118 "product_name": "Malloc disk", 00:16:51.118 "block_size": 512, 00:16:51.118 "num_blocks": 65536, 00:16:51.118 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:51.118 "assigned_rate_limits": { 00:16:51.118 "rw_ios_per_sec": 0, 00:16:51.118 "rw_mbytes_per_sec": 0, 00:16:51.118 
"r_mbytes_per_sec": 0, 00:16:51.118 "w_mbytes_per_sec": 0 00:16:51.118 }, 00:16:51.118 "claimed": true, 00:16:51.118 "claim_type": "exclusive_write", 00:16:51.118 "zoned": false, 00:16:51.118 "supported_io_types": { 00:16:51.118 "read": true, 00:16:51.118 "write": true, 00:16:51.118 "unmap": true, 00:16:51.118 "flush": true, 00:16:51.118 "reset": true, 00:16:51.118 "nvme_admin": false, 00:16:51.118 "nvme_io": false, 00:16:51.118 "nvme_io_md": false, 00:16:51.118 "write_zeroes": true, 00:16:51.118 "zcopy": true, 00:16:51.118 "get_zone_info": false, 00:16:51.118 "zone_management": false, 00:16:51.118 "zone_append": false, 00:16:51.118 "compare": false, 00:16:51.118 "compare_and_write": false, 00:16:51.118 "abort": true, 00:16:51.118 "seek_hole": false, 00:16:51.118 "seek_data": false, 00:16:51.118 "copy": true, 00:16:51.118 "nvme_iov_md": false 00:16:51.118 }, 00:16:51.118 "memory_domains": [ 00:16:51.118 { 00:16:51.118 "dma_device_id": "system", 00:16:51.118 "dma_device_type": 1 00:16:51.118 }, 00:16:51.118 { 00:16:51.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.118 "dma_device_type": 2 00:16:51.118 } 00:16:51.118 ], 00:16:51.118 "driver_specific": {} 00:16:51.118 } 00:16:51.118 ] 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.118 09:29:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.118 "name": "Existed_Raid", 00:16:51.118 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:51.118 "strip_size_kb": 64, 00:16:51.118 "state": "configuring", 00:16:51.118 "raid_level": "raid5f", 00:16:51.118 "superblock": true, 00:16:51.118 "num_base_bdevs": 4, 00:16:51.118 "num_base_bdevs_discovered": 3, 00:16:51.118 "num_base_bdevs_operational": 4, 00:16:51.118 "base_bdevs_list": [ 00:16:51.118 { 00:16:51.118 "name": "BaseBdev1", 00:16:51.118 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:51.118 "is_configured": true, 00:16:51.118 "data_offset": 2048, 00:16:51.118 "data_size": 63488 00:16:51.118 
}, 00:16:51.118 { 00:16:51.118 "name": null, 00:16:51.118 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:51.118 "is_configured": false, 00:16:51.118 "data_offset": 0, 00:16:51.118 "data_size": 63488 00:16:51.118 }, 00:16:51.118 { 00:16:51.118 "name": "BaseBdev3", 00:16:51.118 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:51.118 "is_configured": true, 00:16:51.118 "data_offset": 2048, 00:16:51.118 "data_size": 63488 00:16:51.118 }, 00:16:51.118 { 00:16:51.118 "name": "BaseBdev4", 00:16:51.118 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:51.118 "is_configured": true, 00:16:51.118 "data_offset": 2048, 00:16:51.118 "data_size": 63488 00:16:51.118 } 00:16:51.118 ] 00:16:51.118 }' 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.118 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.684 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.685 
[2024-11-20 09:29:16.914048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.685 "name": "Existed_Raid", 00:16:51.685 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:51.685 "strip_size_kb": 64, 00:16:51.685 "state": "configuring", 00:16:51.685 "raid_level": "raid5f", 00:16:51.685 "superblock": true, 00:16:51.685 "num_base_bdevs": 4, 00:16:51.685 "num_base_bdevs_discovered": 2, 00:16:51.685 "num_base_bdevs_operational": 4, 00:16:51.685 "base_bdevs_list": [ 00:16:51.685 { 00:16:51.685 "name": "BaseBdev1", 00:16:51.685 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:51.685 "is_configured": true, 00:16:51.685 "data_offset": 2048, 00:16:51.685 "data_size": 63488 00:16:51.685 }, 00:16:51.685 { 00:16:51.685 "name": null, 00:16:51.685 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:51.685 "is_configured": false, 00:16:51.685 "data_offset": 0, 00:16:51.685 "data_size": 63488 00:16:51.685 }, 00:16:51.685 { 00:16:51.685 "name": null, 00:16:51.685 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:51.685 "is_configured": false, 00:16:51.685 "data_offset": 0, 00:16:51.685 "data_size": 63488 00:16:51.685 }, 00:16:51.685 { 00:16:51.685 "name": "BaseBdev4", 00:16:51.685 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:51.685 "is_configured": true, 00:16:51.685 "data_offset": 2048, 00:16:51.685 "data_size": 63488 00:16:51.685 } 00:16:51.685 ] 00:16:51.685 }' 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.685 09:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.944 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.944 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:51.944 09:29:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.944 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.202 [2024-11-20 09:29:17.441644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.202 09:29:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.202 "name": "Existed_Raid", 00:16:52.202 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:52.202 "strip_size_kb": 64, 00:16:52.202 "state": "configuring", 00:16:52.202 "raid_level": "raid5f", 00:16:52.202 "superblock": true, 00:16:52.202 "num_base_bdevs": 4, 00:16:52.202 "num_base_bdevs_discovered": 3, 00:16:52.202 "num_base_bdevs_operational": 4, 00:16:52.202 "base_bdevs_list": [ 00:16:52.202 { 00:16:52.202 "name": "BaseBdev1", 00:16:52.202 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:52.202 "is_configured": true, 00:16:52.202 "data_offset": 2048, 00:16:52.202 "data_size": 63488 00:16:52.202 }, 00:16:52.202 { 00:16:52.202 "name": null, 00:16:52.202 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:52.202 "is_configured": false, 00:16:52.202 "data_offset": 0, 00:16:52.202 "data_size": 63488 00:16:52.202 }, 00:16:52.202 { 00:16:52.202 "name": "BaseBdev3", 00:16:52.202 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:52.202 "is_configured": true, 00:16:52.202 "data_offset": 2048, 00:16:52.202 "data_size": 63488 00:16:52.202 }, 00:16:52.202 { 
00:16:52.202 "name": "BaseBdev4", 00:16:52.202 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:52.202 "is_configured": true, 00:16:52.202 "data_offset": 2048, 00:16:52.202 "data_size": 63488 00:16:52.202 } 00:16:52.202 ] 00:16:52.202 }' 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.202 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.769 09:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.769 [2024-11-20 09:29:17.996943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.769 "name": "Existed_Raid", 00:16:52.769 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:52.769 "strip_size_kb": 64, 00:16:52.769 "state": "configuring", 00:16:52.769 "raid_level": "raid5f", 00:16:52.769 "superblock": true, 00:16:52.769 "num_base_bdevs": 4, 00:16:52.769 "num_base_bdevs_discovered": 2, 00:16:52.769 
"num_base_bdevs_operational": 4, 00:16:52.769 "base_bdevs_list": [ 00:16:52.769 { 00:16:52.769 "name": null, 00:16:52.769 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:52.769 "is_configured": false, 00:16:52.769 "data_offset": 0, 00:16:52.769 "data_size": 63488 00:16:52.769 }, 00:16:52.769 { 00:16:52.769 "name": null, 00:16:52.769 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:52.769 "is_configured": false, 00:16:52.769 "data_offset": 0, 00:16:52.769 "data_size": 63488 00:16:52.769 }, 00:16:52.769 { 00:16:52.769 "name": "BaseBdev3", 00:16:52.769 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:52.769 "is_configured": true, 00:16:52.769 "data_offset": 2048, 00:16:52.769 "data_size": 63488 00:16:52.769 }, 00:16:52.769 { 00:16:52.769 "name": "BaseBdev4", 00:16:52.769 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:52.769 "is_configured": true, 00:16:52.769 "data_offset": 2048, 00:16:52.769 "data_size": 63488 00:16:52.769 } 00:16:52.769 ] 00:16:52.769 }' 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.769 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.335 [2024-11-20 09:29:18.603954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.335 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.336 "name": "Existed_Raid", 00:16:53.336 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:53.336 "strip_size_kb": 64, 00:16:53.336 "state": "configuring", 00:16:53.336 "raid_level": "raid5f", 00:16:53.336 "superblock": true, 00:16:53.336 "num_base_bdevs": 4, 00:16:53.336 "num_base_bdevs_discovered": 3, 00:16:53.336 "num_base_bdevs_operational": 4, 00:16:53.336 "base_bdevs_list": [ 00:16:53.336 { 00:16:53.336 "name": null, 00:16:53.336 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:53.336 "is_configured": false, 00:16:53.336 "data_offset": 0, 00:16:53.336 "data_size": 63488 00:16:53.336 }, 00:16:53.336 { 00:16:53.336 "name": "BaseBdev2", 00:16:53.336 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:53.336 "is_configured": true, 00:16:53.336 "data_offset": 2048, 00:16:53.336 "data_size": 63488 00:16:53.336 }, 00:16:53.336 { 00:16:53.336 "name": "BaseBdev3", 00:16:53.336 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:53.336 "is_configured": true, 00:16:53.336 "data_offset": 2048, 00:16:53.336 "data_size": 63488 00:16:53.336 }, 00:16:53.336 { 00:16:53.336 "name": "BaseBdev4", 00:16:53.336 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:53.336 "is_configured": true, 00:16:53.336 "data_offset": 2048, 00:16:53.336 "data_size": 63488 00:16:53.336 } 00:16:53.336 ] 00:16:53.336 }' 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.336 09:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:53.594 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.594 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.594 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 [2024-11-20 09:29:19.189944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:53.863 [2024-11-20 09:29:19.190274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:53.863 [2024-11-20 
09:29:19.190298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:53.863 NewBaseBdev 00:16:53.863 [2024-11-20 09:29:19.190711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 [2024-11-20 09:29:19.199198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:53.863 [2024-11-20 09:29:19.199251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:53.863 [2024-11-20 09:29:19.199616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 [ 00:16:53.863 { 00:16:53.863 "name": "NewBaseBdev", 00:16:53.863 "aliases": [ 00:16:53.863 "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1" 00:16:53.863 ], 00:16:53.863 "product_name": "Malloc disk", 00:16:53.863 "block_size": 512, 00:16:53.863 "num_blocks": 65536, 00:16:53.863 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:53.863 "assigned_rate_limits": { 00:16:53.863 "rw_ios_per_sec": 0, 00:16:53.863 "rw_mbytes_per_sec": 0, 00:16:53.863 "r_mbytes_per_sec": 0, 00:16:53.863 "w_mbytes_per_sec": 0 00:16:53.863 }, 00:16:53.863 "claimed": true, 00:16:53.863 "claim_type": "exclusive_write", 00:16:53.863 "zoned": false, 00:16:53.863 "supported_io_types": { 00:16:53.863 "read": true, 00:16:53.863 "write": true, 00:16:53.863 "unmap": true, 00:16:53.863 "flush": true, 00:16:53.863 "reset": true, 00:16:53.863 "nvme_admin": false, 00:16:53.863 "nvme_io": false, 00:16:53.863 "nvme_io_md": false, 00:16:53.863 "write_zeroes": true, 00:16:53.863 "zcopy": true, 00:16:53.863 "get_zone_info": false, 00:16:53.863 "zone_management": false, 00:16:53.863 "zone_append": false, 00:16:53.863 "compare": false, 00:16:53.863 "compare_and_write": false, 00:16:53.863 "abort": true, 00:16:53.863 "seek_hole": false, 00:16:53.863 "seek_data": false, 00:16:53.863 "copy": true, 00:16:53.863 "nvme_iov_md": false 00:16:53.863 }, 00:16:53.863 "memory_domains": [ 00:16:53.863 { 00:16:53.863 "dma_device_id": "system", 00:16:53.863 "dma_device_type": 1 00:16:53.863 }, 00:16:53.863 { 00:16:53.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.863 "dma_device_type": 2 00:16:53.863 } 00:16:53.863 ], 00:16:53.863 "driver_specific": {} 00:16:53.863 } 00:16:53.863 ] 00:16:53.863 09:29:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:53.863 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.863 "name": "Existed_Raid", 00:16:53.863 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:53.863 "strip_size_kb": 64, 00:16:53.863 "state": "online", 00:16:53.863 "raid_level": "raid5f", 00:16:53.863 "superblock": true, 00:16:53.863 "num_base_bdevs": 4, 00:16:53.863 "num_base_bdevs_discovered": 4, 00:16:53.863 "num_base_bdevs_operational": 4, 00:16:53.863 "base_bdevs_list": [ 00:16:53.863 { 00:16:53.863 "name": "NewBaseBdev", 00:16:53.863 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:53.863 "is_configured": true, 00:16:53.863 "data_offset": 2048, 00:16:53.863 "data_size": 63488 00:16:53.863 }, 00:16:53.863 { 00:16:53.863 "name": "BaseBdev2", 00:16:53.863 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:53.863 "is_configured": true, 00:16:53.863 "data_offset": 2048, 00:16:53.864 "data_size": 63488 00:16:53.864 }, 00:16:53.864 { 00:16:53.864 "name": "BaseBdev3", 00:16:53.864 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:53.864 "is_configured": true, 00:16:53.864 "data_offset": 2048, 00:16:53.864 "data_size": 63488 00:16:53.864 }, 00:16:53.864 { 00:16:53.864 "name": "BaseBdev4", 00:16:53.864 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:53.864 "is_configured": true, 00:16:53.864 "data_offset": 2048, 00:16:53.864 "data_size": 63488 00:16:53.864 } 00:16:53.864 ] 00:16:53.864 }' 00:16:53.864 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.864 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.431 [2024-11-20 09:29:19.688796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.431 "name": "Existed_Raid", 00:16:54.431 "aliases": [ 00:16:54.431 "48fe624c-378f-4eb3-93e9-91f315346af7" 00:16:54.431 ], 00:16:54.431 "product_name": "Raid Volume", 00:16:54.431 "block_size": 512, 00:16:54.431 "num_blocks": 190464, 00:16:54.431 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:54.431 "assigned_rate_limits": { 00:16:54.431 "rw_ios_per_sec": 0, 00:16:54.431 "rw_mbytes_per_sec": 0, 00:16:54.431 "r_mbytes_per_sec": 0, 00:16:54.431 "w_mbytes_per_sec": 0 00:16:54.431 }, 00:16:54.431 "claimed": false, 00:16:54.431 "zoned": false, 00:16:54.431 "supported_io_types": { 00:16:54.431 "read": true, 00:16:54.431 "write": true, 00:16:54.431 "unmap": false, 00:16:54.431 "flush": false, 00:16:54.431 "reset": true, 00:16:54.431 "nvme_admin": false, 00:16:54.431 "nvme_io": false, 
00:16:54.431 "nvme_io_md": false, 00:16:54.431 "write_zeroes": true, 00:16:54.431 "zcopy": false, 00:16:54.431 "get_zone_info": false, 00:16:54.431 "zone_management": false, 00:16:54.431 "zone_append": false, 00:16:54.431 "compare": false, 00:16:54.431 "compare_and_write": false, 00:16:54.431 "abort": false, 00:16:54.431 "seek_hole": false, 00:16:54.431 "seek_data": false, 00:16:54.431 "copy": false, 00:16:54.431 "nvme_iov_md": false 00:16:54.431 }, 00:16:54.431 "driver_specific": { 00:16:54.431 "raid": { 00:16:54.431 "uuid": "48fe624c-378f-4eb3-93e9-91f315346af7", 00:16:54.431 "strip_size_kb": 64, 00:16:54.431 "state": "online", 00:16:54.431 "raid_level": "raid5f", 00:16:54.431 "superblock": true, 00:16:54.431 "num_base_bdevs": 4, 00:16:54.431 "num_base_bdevs_discovered": 4, 00:16:54.431 "num_base_bdevs_operational": 4, 00:16:54.431 "base_bdevs_list": [ 00:16:54.431 { 00:16:54.431 "name": "NewBaseBdev", 00:16:54.431 "uuid": "f3ca64a7-1e28-46e1-b8ce-f46f70eacfc1", 00:16:54.431 "is_configured": true, 00:16:54.431 "data_offset": 2048, 00:16:54.431 "data_size": 63488 00:16:54.431 }, 00:16:54.431 { 00:16:54.431 "name": "BaseBdev2", 00:16:54.431 "uuid": "f87e6e91-9e48-4bb0-a98f-529ed8a849d6", 00:16:54.431 "is_configured": true, 00:16:54.431 "data_offset": 2048, 00:16:54.431 "data_size": 63488 00:16:54.431 }, 00:16:54.431 { 00:16:54.431 "name": "BaseBdev3", 00:16:54.431 "uuid": "f9797cff-1ec1-4f95-a487-d87a0ecdad55", 00:16:54.431 "is_configured": true, 00:16:54.431 "data_offset": 2048, 00:16:54.431 "data_size": 63488 00:16:54.431 }, 00:16:54.431 { 00:16:54.431 "name": "BaseBdev4", 00:16:54.431 "uuid": "8f65b829-2f68-48e6-94b2-b9f009e2ebf4", 00:16:54.431 "is_configured": true, 00:16:54.431 "data_offset": 2048, 00:16:54.431 "data_size": 63488 00:16:54.431 } 00:16:54.431 ] 00:16:54.431 } 00:16:54.431 } 00:16:54.431 }' 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:54.431 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:54.431 BaseBdev2 00:16:54.431 BaseBdev3 00:16:54.431 BaseBdev4' 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:54.432 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.691 09:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.691 [2024-11-20 09:29:20.011975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.691 [2024-11-20 09:29:20.012080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.691 [2024-11-20 09:29:20.012235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.691 [2024-11-20 09:29:20.012670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.691 [2024-11-20 09:29:20.012757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83860 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83860 ']' 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83860 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83860 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.691 killing process with pid 83860 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83860' 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83860 00:16:54.691 [2024-11-20 09:29:20.061586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.691 09:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83860 00:16:55.259 [2024-11-20 09:29:20.591322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.632 09:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:56.632 00:16:56.632 real 0m12.645s 00:16:56.632 user 0m19.536s 00:16:56.632 sys 0m2.301s 00:16:56.632 09:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.632 09:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.632 ************************************ 00:16:56.632 END TEST raid5f_state_function_test_sb 00:16:56.632 ************************************ 00:16:56.891 09:29:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:56.891 09:29:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:56.891 09:29:22 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.891 09:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.891 ************************************ 00:16:56.891 START TEST raid5f_superblock_test 00:16:56.891 ************************************ 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84538 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84538 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84538 ']' 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.891 09:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.891 [2024-11-20 09:29:22.254233] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:16:56.891 [2024-11-20 09:29:22.254494] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84538 ] 00:16:57.150 [2024-11-20 09:29:22.430915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.150 [2024-11-20 09:29:22.595166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.408 [2024-11-20 09:29:22.830783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.408 [2024-11-20 09:29:22.830947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.668 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.927 malloc1 00:16:57.927 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 [2024-11-20 09:29:23.175683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.928 [2024-11-20 09:29:23.175809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.928 [2024-11-20 09:29:23.175865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.928 [2024-11-20 09:29:23.175904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.928 [2024-11-20 09:29:23.178379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.928 [2024-11-20 09:29:23.178506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.928 pt1 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.928 malloc2
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.928 [2024-11-20 09:29:23.239409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:57.928 [2024-11-20 09:29:23.239554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:57.928 [2024-11-20 09:29:23.239600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:57.928 [2024-11-20 09:29:23.239637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:57.928 [2024-11-20 09:29:23.242079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:57.928 [2024-11-20 09:29:23.242165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:57.928 pt2
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.928 malloc3
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.928 [2024-11-20 09:29:23.311785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:57.928 [2024-11-20 09:29:23.311844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:57.928 [2024-11-20 09:29:23.311868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:57.928 [2024-11-20 09:29:23.311879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:57.928 [2024-11-20 09:29:23.314164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:57.928 [2024-11-20 09:29:23.314202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:57.928 pt3
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.928 malloc4
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:57.928 [2024-11-20 09:29:23.369960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:57.928 [2024-11-20 09:29:23.370026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:57.928 [2024-11-20 09:29:23.370047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:16:57.928 [2024-11-20 09:29:23.370058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:57.928 [2024-11-20 09:29:23.372433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:57.928 [2024-11-20 09:29:23.372485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:57.928 pt4
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:57.928 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.187 [2024-11-20 09:29:23.381975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:58.187 [2024-11-20 09:29:23.383985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:58.187 [2024-11-20 09:29:23.384149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:58.187 [2024-11-20 09:29:23.384232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:58.187 [2024-11-20 09:29:23.384501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:58.187 [2024-11-20 09:29:23.384522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:58.187 [2024-11-20 09:29:23.384829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:58.187 [2024-11-20 09:29:23.393546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:58.187 [2024-11-20 09:29:23.393571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:58.187 [2024-11-20 09:29:23.393841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:58.187 "name": "raid_bdev1",
00:16:58.187 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4",
00:16:58.187 "strip_size_kb": 64,
00:16:58.187 "state": "online",
00:16:58.187 "raid_level": "raid5f",
00:16:58.187 "superblock": true,
00:16:58.187 "num_base_bdevs": 4,
00:16:58.187 "num_base_bdevs_discovered": 4,
00:16:58.187 "num_base_bdevs_operational": 4,
00:16:58.187 "base_bdevs_list": [
00:16:58.187 {
00:16:58.187 "name": "pt1",
00:16:58.187 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:58.187 "is_configured": true,
00:16:58.187 "data_offset": 2048,
00:16:58.187 "data_size": 63488
00:16:58.187 },
00:16:58.187 {
00:16:58.187 "name": "pt2",
00:16:58.187 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:58.187 "is_configured": true,
00:16:58.187 "data_offset": 2048,
00:16:58.187 "data_size": 63488
00:16:58.187 },
00:16:58.187 {
00:16:58.187 "name": "pt3",
00:16:58.187 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:58.187 "is_configured": true,
00:16:58.187 "data_offset": 2048,
00:16:58.187 "data_size": 63488
00:16:58.187 },
00:16:58.187 {
00:16:58.187 "name": "pt4",
00:16:58.187 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:58.187 "is_configured": true,
00:16:58.187 "data_offset": 2048,
00:16:58.187 "data_size": 63488
00:16:58.187 }
00:16:58.187 ]
00:16:58.187 }'
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:58.187 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:58.445 [2024-11-20 09:29:23.870769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:58.445 09:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.702 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:58.702 "name": "raid_bdev1",
00:16:58.702 "aliases": [
00:16:58.702 "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4"
00:16:58.702 ],
00:16:58.702 "product_name": "Raid Volume",
00:16:58.702 "block_size": 512,
00:16:58.702 "num_blocks": 190464,
00:16:58.702 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4",
00:16:58.702 "assigned_rate_limits": {
00:16:58.702 "rw_ios_per_sec": 0,
00:16:58.702 "rw_mbytes_per_sec": 0,
00:16:58.702 "r_mbytes_per_sec": 0,
00:16:58.702 "w_mbytes_per_sec": 0
00:16:58.702 },
00:16:58.702 "claimed": false,
00:16:58.702 "zoned": false,
00:16:58.702 "supported_io_types": {
00:16:58.702 "read": true,
00:16:58.702 "write": true,
00:16:58.702 "unmap": false,
00:16:58.703 "flush": false,
00:16:58.703 "reset": true,
00:16:58.703 "nvme_admin": false,
00:16:58.703 "nvme_io": false,
00:16:58.703 "nvme_io_md": false,
00:16:58.703 "write_zeroes": true,
00:16:58.703 "zcopy": false,
00:16:58.703 "get_zone_info": false,
00:16:58.703 "zone_management": false,
00:16:58.703 "zone_append": false,
00:16:58.703 "compare": false,
00:16:58.703 "compare_and_write": false,
00:16:58.703 "abort": false,
00:16:58.703 "seek_hole": false,
00:16:58.703 "seek_data": false,
00:16:58.703 "copy": false,
00:16:58.703 "nvme_iov_md": false
00:16:58.703 },
00:16:58.703 "driver_specific": {
00:16:58.703 "raid": {
00:16:58.703 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4",
00:16:58.703 "strip_size_kb": 64,
00:16:58.703 "state": "online",
00:16:58.703 "raid_level": "raid5f",
00:16:58.703 "superblock": true,
00:16:58.703 "num_base_bdevs": 4,
00:16:58.703 "num_base_bdevs_discovered": 4,
00:16:58.703 "num_base_bdevs_operational": 4,
00:16:58.703 "base_bdevs_list": [
00:16:58.703 {
00:16:58.703 "name": "pt1",
00:16:58.703 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:58.703 "is_configured": true,
00:16:58.703 "data_offset": 2048,
00:16:58.703 "data_size": 63488
00:16:58.703 },
00:16:58.703 {
00:16:58.703 "name": "pt2",
00:16:58.703 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:58.703 "is_configured": true,
00:16:58.703 "data_offset": 2048,
00:16:58.703 "data_size": 63488
00:16:58.703 },
00:16:58.703 {
00:16:58.703 "name": "pt3",
00:16:58.703 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:58.703 "is_configured": true,
00:16:58.703 "data_offset": 2048,
00:16:58.703 "data_size": 63488
00:16:58.703 },
00:16:58.703 {
00:16:58.703 "name": "pt4",
00:16:58.703 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:58.703 "is_configured": true,
00:16:58.703 "data_offset": 2048,
00:16:58.703 "data_size": 63488
00:16:58.703 }
00:16:58.703 ]
00:16:58.703 }
00:16:58.703 }
00:16:58.703 }'
00:16:58.703 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:58.703 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:58.703 pt2
00:16:58.703 pt3
00:16:58.703 pt4'
00:16:58.703 09:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:58.703 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:58.961 [2024-11-20 09:29:24.226163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3e41ce29-aa38-4ecd-bbcc-04de4b6225a4
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3e41ce29-aa38-4ecd-bbcc-04de4b6225a4 ']'
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 [2024-11-20 09:29:24.269884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:58.961 [2024-11-20 09:29:24.269979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:58.961 [2024-11-20 09:29:24.270106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:58.961 [2024-11-20 09:29:24.270241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:58.961 [2024-11-20 09:29:24.270301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:58.961 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:59.219 [2024-11-20 09:29:24.425650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:59.219 [2024-11-20 09:29:24.427877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:59.219 [2024-11-20 09:29:24.427985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:59.219 [2024-11-20 09:29:24.428057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:59.219 [2024-11-20 09:29:24.428150] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:59.219 [2024-11-20 09:29:24.428261] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:59.219 [2024-11-20 09:29:24.428330] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:16:59.219 [2024-11-20 09:29:24.428405] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:16:59.219 [2024-11-20 09:29:24.428488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:59.219 [2024-11-20 09:29:24.428526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:16:59.219 request:
00:16:59.219 {
00:16:59.219 "name": "raid_bdev1",
00:16:59.219 "raid_level": "raid5f",
00:16:59.219 "base_bdevs": [
00:16:59.219 "malloc1",
00:16:59.219 "malloc2",
00:16:59.219 "malloc3",
00:16:59.219 "malloc4"
00:16:59.219 ],
00:16:59.219 "strip_size_kb": 64,
00:16:59.219 "superblock": false,
00:16:59.219 "method": "bdev_raid_create",
00:16:59.219 "req_id": 1
00:16:59.219 }
00:16:59.219 Got JSON-RPC error response
00:16:59.219 response:
00:16:59.219 {
00:16:59.219 "code": -17,
00:16:59.219 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:59.219 }
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:59.219 [2024-11-20 09:29:24.497495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 [2024-11-20 09:29:24.497618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:59.219 [2024-11-20 09:29:24.497656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:59.219 [2024-11-20 09:29:24.497698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:59.219 [2024-11-20 09:29:24.500118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:59.219 [2024-11-20 09:29:24.500208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:59.219 [2024-11-20 09:29:24.500337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:59.219 [2024-11-20 09:29:24.500456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:59.219 pt1
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:59.219 "name": "raid_bdev1",
00:16:59.219 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4",
00:16:59.219 "strip_size_kb": 64,
00:16:59.219 "state": "configuring",
00:16:59.219 "raid_level": "raid5f",
00:16:59.219 "superblock": true,
00:16:59.219 "num_base_bdevs": 4,
00:16:59.219 "num_base_bdevs_discovered": 1,
00:16:59.219 "num_base_bdevs_operational": 4,
00:16:59.219 "base_bdevs_list": [
00:16:59.219 {
00:16:59.219 "name": "pt1",
00:16:59.219 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:59.219 "is_configured": true,
00:16:59.219 "data_offset": 2048,
00:16:59.219 "data_size": 63488
00:16:59.219 },
00:16:59.219 {
00:16:59.219 "name": null,
00:16:59.219 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:59.219 "is_configured": false,
00:16:59.219 "data_offset": 2048,
00:16:59.219 "data_size": 63488
00:16:59.219 },
00:16:59.219 {
00:16:59.219 "name": null,
00:16:59.219 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:59.219 "is_configured": false,
00:16:59.219 "data_offset": 2048,
00:16:59.219 "data_size": 63488
00:16:59.219 },
00:16:59.219 {
00:16:59.219 "name": null,
00:16:59.219 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:59.219 "is_configured": false,
00:16:59.219 "data_offset": 2048,
00:16:59.219 "data_size": 63488
00:16:59.219 }
00:16:59.219 ]
00:16:59.219 }'
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:59.219 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:59.785 [2024-11-20 09:29:24.972684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:59.785 [2024-11-20 09:29:24.972770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:59.785 [2024-11-20 09:29:24.972791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:16:59.785 [2024-11-20 09:29:24.972803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:59.785 [2024-11-20 09:29:24.973285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:59.785 [2024-11-20 09:29:24.973306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:59.785 [2024-11-20 09:29:24.973396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:59.785 [2024-11-20 09:29:24.973424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:59.785 pt2
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.785 [2024-11-20 09:29:24.980684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.785 09:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.785 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:59.785 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.785 "name": "raid_bdev1", 00:16:59.785 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:16:59.785 "strip_size_kb": 64, 00:16:59.785 "state": "configuring", 00:16:59.785 "raid_level": "raid5f", 00:16:59.785 "superblock": true, 00:16:59.785 "num_base_bdevs": 4, 00:16:59.785 "num_base_bdevs_discovered": 1, 00:16:59.785 "num_base_bdevs_operational": 4, 00:16:59.785 "base_bdevs_list": [ 00:16:59.785 { 00:16:59.785 "name": "pt1", 00:16:59.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.785 "is_configured": true, 00:16:59.785 "data_offset": 2048, 00:16:59.785 "data_size": 63488 00:16:59.785 }, 00:16:59.785 { 00:16:59.785 "name": null, 00:16:59.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.785 "is_configured": false, 00:16:59.785 "data_offset": 0, 00:16:59.785 "data_size": 63488 00:16:59.785 }, 00:16:59.785 { 00:16:59.785 "name": null, 00:16:59.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.785 "is_configured": false, 00:16:59.785 "data_offset": 2048, 00:16:59.785 "data_size": 63488 00:16:59.785 }, 00:16:59.785 { 00:16:59.785 "name": null, 00:16:59.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.785 "is_configured": false, 00:16:59.785 "data_offset": 2048, 00:16:59.785 "data_size": 63488 00:16:59.785 } 00:16:59.785 ] 00:16:59.785 }' 00:16:59.785 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.785 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 [2024-11-20 09:29:25.443916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.044 [2024-11-20 09:29:25.444035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.044 [2024-11-20 09:29:25.444062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:00.044 [2024-11-20 09:29:25.444072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.044 [2024-11-20 09:29:25.444613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.044 [2024-11-20 09:29:25.444634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.044 [2024-11-20 09:29:25.444727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.044 [2024-11-20 09:29:25.444751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.044 pt2 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 [2024-11-20 09:29:25.455866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:00.044 [2024-11-20 09:29:25.455920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.044 [2024-11-20 09:29:25.455941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:00.044 [2024-11-20 09:29:25.455950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.044 [2024-11-20 09:29:25.456376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.044 [2024-11-20 09:29:25.456393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:00.044 [2024-11-20 09:29:25.456488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:00.044 [2024-11-20 09:29:25.456510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:00.044 pt3 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 [2024-11-20 09:29:25.467822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:00.044 [2024-11-20 09:29:25.467872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.044 [2024-11-20 09:29:25.467892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:00.044 [2024-11-20 09:29:25.467901] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.044 [2024-11-20 09:29:25.468322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.044 [2024-11-20 09:29:25.468339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:00.044 [2024-11-20 09:29:25.468409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:00.044 [2024-11-20 09:29:25.468437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:00.044 [2024-11-20 09:29:25.468586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:00.044 [2024-11-20 09:29:25.468595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:00.044 [2024-11-20 09:29:25.468854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:00.044 [2024-11-20 09:29:25.476967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:00.044 [2024-11-20 09:29:25.476995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:00.044 [2024-11-20 09:29:25.477231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.044 pt4 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.044 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.302 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.302 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.303 "name": "raid_bdev1", 00:17:00.303 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:00.303 "strip_size_kb": 64, 00:17:00.303 "state": "online", 00:17:00.303 "raid_level": "raid5f", 00:17:00.303 "superblock": true, 00:17:00.303 "num_base_bdevs": 4, 00:17:00.303 "num_base_bdevs_discovered": 4, 00:17:00.303 "num_base_bdevs_operational": 4, 00:17:00.303 "base_bdevs_list": [ 00:17:00.303 { 00:17:00.303 "name": "pt1", 00:17:00.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.303 "is_configured": true, 00:17:00.303 
"data_offset": 2048, 00:17:00.303 "data_size": 63488 00:17:00.303 }, 00:17:00.303 { 00:17:00.303 "name": "pt2", 00:17:00.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.303 "is_configured": true, 00:17:00.303 "data_offset": 2048, 00:17:00.303 "data_size": 63488 00:17:00.303 }, 00:17:00.303 { 00:17:00.303 "name": "pt3", 00:17:00.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.303 "is_configured": true, 00:17:00.303 "data_offset": 2048, 00:17:00.303 "data_size": 63488 00:17:00.303 }, 00:17:00.303 { 00:17:00.303 "name": "pt4", 00:17:00.303 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.303 "is_configured": true, 00:17:00.303 "data_offset": 2048, 00:17:00.303 "data_size": 63488 00:17:00.303 } 00:17:00.303 ] 00:17:00.303 }' 00:17:00.303 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.303 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.562 09:29:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.562 [2024-11-20 09:29:25.966622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.562 09:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.562 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.562 "name": "raid_bdev1", 00:17:00.562 "aliases": [ 00:17:00.562 "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4" 00:17:00.562 ], 00:17:00.562 "product_name": "Raid Volume", 00:17:00.562 "block_size": 512, 00:17:00.562 "num_blocks": 190464, 00:17:00.562 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:00.562 "assigned_rate_limits": { 00:17:00.562 "rw_ios_per_sec": 0, 00:17:00.562 "rw_mbytes_per_sec": 0, 00:17:00.562 "r_mbytes_per_sec": 0, 00:17:00.562 "w_mbytes_per_sec": 0 00:17:00.562 }, 00:17:00.562 "claimed": false, 00:17:00.562 "zoned": false, 00:17:00.562 "supported_io_types": { 00:17:00.562 "read": true, 00:17:00.562 "write": true, 00:17:00.562 "unmap": false, 00:17:00.562 "flush": false, 00:17:00.562 "reset": true, 00:17:00.562 "nvme_admin": false, 00:17:00.562 "nvme_io": false, 00:17:00.562 "nvme_io_md": false, 00:17:00.562 "write_zeroes": true, 00:17:00.562 "zcopy": false, 00:17:00.562 "get_zone_info": false, 00:17:00.562 "zone_management": false, 00:17:00.562 "zone_append": false, 00:17:00.562 "compare": false, 00:17:00.562 "compare_and_write": false, 00:17:00.562 "abort": false, 00:17:00.562 "seek_hole": false, 00:17:00.562 "seek_data": false, 00:17:00.562 "copy": false, 00:17:00.562 "nvme_iov_md": false 00:17:00.562 }, 00:17:00.562 "driver_specific": { 00:17:00.562 "raid": { 00:17:00.562 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:00.562 "strip_size_kb": 64, 00:17:00.562 "state": "online", 00:17:00.562 "raid_level": "raid5f", 00:17:00.562 "superblock": true, 00:17:00.562 "num_base_bdevs": 4, 00:17:00.562 "num_base_bdevs_discovered": 4, 
00:17:00.562 "num_base_bdevs_operational": 4, 00:17:00.562 "base_bdevs_list": [ 00:17:00.562 { 00:17:00.562 "name": "pt1", 00:17:00.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.562 "is_configured": true, 00:17:00.562 "data_offset": 2048, 00:17:00.562 "data_size": 63488 00:17:00.562 }, 00:17:00.562 { 00:17:00.562 "name": "pt2", 00:17:00.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.562 "is_configured": true, 00:17:00.562 "data_offset": 2048, 00:17:00.562 "data_size": 63488 00:17:00.562 }, 00:17:00.562 { 00:17:00.562 "name": "pt3", 00:17:00.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:00.562 "is_configured": true, 00:17:00.562 "data_offset": 2048, 00:17:00.562 "data_size": 63488 00:17:00.562 }, 00:17:00.562 { 00:17:00.562 "name": "pt4", 00:17:00.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:00.562 "is_configured": true, 00:17:00.562 "data_offset": 2048, 00:17:00.562 "data_size": 63488 00:17:00.562 } 00:17:00.562 ] 00:17:00.562 } 00:17:00.562 } 00:17:00.562 }' 00:17:00.562 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:00.821 pt2 00:17:00.821 pt3 00:17:00.821 pt4' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.821 09:29:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.821 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.821 [2024-11-20 09:29:26.274051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.080 
09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3e41ce29-aa38-4ecd-bbcc-04de4b6225a4 '!=' 3e41ce29-aa38-4ecd-bbcc-04de4b6225a4 ']' 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.080 [2024-11-20 09:29:26.301868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.080 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.080 "name": "raid_bdev1", 00:17:01.080 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:01.080 "strip_size_kb": 64, 00:17:01.080 "state": "online", 00:17:01.080 "raid_level": "raid5f", 00:17:01.080 "superblock": true, 00:17:01.080 "num_base_bdevs": 4, 00:17:01.080 "num_base_bdevs_discovered": 3, 00:17:01.080 "num_base_bdevs_operational": 3, 00:17:01.080 "base_bdevs_list": [ 00:17:01.080 { 00:17:01.080 "name": null, 00:17:01.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.080 "is_configured": false, 00:17:01.080 "data_offset": 0, 00:17:01.080 "data_size": 63488 00:17:01.081 }, 00:17:01.081 { 00:17:01.081 "name": "pt2", 00:17:01.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.081 "is_configured": true, 00:17:01.081 "data_offset": 2048, 00:17:01.081 "data_size": 63488 00:17:01.081 }, 00:17:01.081 { 00:17:01.081 "name": "pt3", 00:17:01.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.081 "is_configured": true, 00:17:01.081 "data_offset": 2048, 00:17:01.081 "data_size": 63488 00:17:01.081 }, 00:17:01.081 { 00:17:01.081 "name": "pt4", 00:17:01.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.081 "is_configured": true, 00:17:01.081 
"data_offset": 2048, 00:17:01.081 "data_size": 63488 00:17:01.081 } 00:17:01.081 ] 00:17:01.081 }' 00:17:01.081 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.081 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.340 [2024-11-20 09:29:26.781022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.340 [2024-11-20 09:29:26.781117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.340 [2024-11-20 09:29:26.781235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.340 [2024-11-20 09:29:26.781358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.340 [2024-11-20 09:29:26.781415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.340 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.598 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.599 [2024-11-20 09:29:26.856871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.599 [2024-11-20 09:29:26.856982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.599 [2024-11-20 09:29:26.857029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:01.599 [2024-11-20 09:29:26.857062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.599 [2024-11-20 09:29:26.859509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.599 [2024-11-20 09:29:26.859587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.599 [2024-11-20 09:29:26.859712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:01.599 [2024-11-20 09:29:26.859790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.599 pt2 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.599 "name": "raid_bdev1", 00:17:01.599 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:01.599 "strip_size_kb": 64, 00:17:01.599 "state": "configuring", 00:17:01.599 "raid_level": "raid5f", 00:17:01.599 "superblock": true, 00:17:01.599 
"num_base_bdevs": 4, 00:17:01.599 "num_base_bdevs_discovered": 1, 00:17:01.599 "num_base_bdevs_operational": 3, 00:17:01.599 "base_bdevs_list": [ 00:17:01.599 { 00:17:01.599 "name": null, 00:17:01.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.599 "is_configured": false, 00:17:01.599 "data_offset": 2048, 00:17:01.599 "data_size": 63488 00:17:01.599 }, 00:17:01.599 { 00:17:01.599 "name": "pt2", 00:17:01.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.599 "is_configured": true, 00:17:01.599 "data_offset": 2048, 00:17:01.599 "data_size": 63488 00:17:01.599 }, 00:17:01.599 { 00:17:01.599 "name": null, 00:17:01.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.599 "is_configured": false, 00:17:01.599 "data_offset": 2048, 00:17:01.599 "data_size": 63488 00:17:01.599 }, 00:17:01.599 { 00:17:01.599 "name": null, 00:17:01.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.599 "is_configured": false, 00:17:01.599 "data_offset": 2048, 00:17:01.599 "data_size": 63488 00:17:01.599 } 00:17:01.599 ] 00:17:01.599 }' 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.599 09:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.858 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:01.858 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.858 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.858 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.858 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.116 [2024-11-20 09:29:27.316160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:02.116 [2024-11-20 
09:29:27.316289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.116 [2024-11-20 09:29:27.316333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:02.116 [2024-11-20 09:29:27.316345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.116 [2024-11-20 09:29:27.316854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.116 [2024-11-20 09:29:27.316875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:02.116 [2024-11-20 09:29:27.316968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:02.116 [2024-11-20 09:29:27.317000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.116 pt3 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.116 "name": "raid_bdev1", 00:17:02.116 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:02.116 "strip_size_kb": 64, 00:17:02.116 "state": "configuring", 00:17:02.116 "raid_level": "raid5f", 00:17:02.116 "superblock": true, 00:17:02.116 "num_base_bdevs": 4, 00:17:02.116 "num_base_bdevs_discovered": 2, 00:17:02.116 "num_base_bdevs_operational": 3, 00:17:02.116 "base_bdevs_list": [ 00:17:02.116 { 00:17:02.116 "name": null, 00:17:02.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.116 "is_configured": false, 00:17:02.116 "data_offset": 2048, 00:17:02.116 "data_size": 63488 00:17:02.116 }, 00:17:02.116 { 00:17:02.116 "name": "pt2", 00:17:02.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.116 "is_configured": true, 00:17:02.116 "data_offset": 2048, 00:17:02.116 "data_size": 63488 00:17:02.116 }, 00:17:02.116 { 00:17:02.116 "name": "pt3", 00:17:02.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.116 "is_configured": true, 00:17:02.116 "data_offset": 2048, 00:17:02.116 "data_size": 63488 00:17:02.116 }, 00:17:02.116 { 00:17:02.116 "name": null, 00:17:02.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.116 "is_configured": false, 00:17:02.116 "data_offset": 2048, 
00:17:02.116 "data_size": 63488 00:17:02.116 } 00:17:02.116 ] 00:17:02.116 }' 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.116 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.375 [2024-11-20 09:29:27.799359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:02.375 [2024-11-20 09:29:27.799525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.375 [2024-11-20 09:29:27.799580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:02.375 [2024-11-20 09:29:27.799618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.375 [2024-11-20 09:29:27.800152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.375 [2024-11-20 09:29:27.800217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:02.375 [2024-11-20 09:29:27.800352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:02.375 [2024-11-20 09:29:27.800423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:02.375 [2024-11-20 09:29:27.800620] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:02.375 [2024-11-20 09:29:27.800664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:02.375 [2024-11-20 09:29:27.800946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:02.375 [2024-11-20 09:29:27.808800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:02.375 [2024-11-20 09:29:27.808870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:02.375 [2024-11-20 09:29:27.809256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.375 pt4 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.375 
09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.375 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.634 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.634 09:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.634 "name": "raid_bdev1", 00:17:02.634 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:02.634 "strip_size_kb": 64, 00:17:02.634 "state": "online", 00:17:02.634 "raid_level": "raid5f", 00:17:02.634 "superblock": true, 00:17:02.634 "num_base_bdevs": 4, 00:17:02.634 "num_base_bdevs_discovered": 3, 00:17:02.634 "num_base_bdevs_operational": 3, 00:17:02.634 "base_bdevs_list": [ 00:17:02.634 { 00:17:02.634 "name": null, 00:17:02.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.634 "is_configured": false, 00:17:02.634 "data_offset": 2048, 00:17:02.634 "data_size": 63488 00:17:02.634 }, 00:17:02.634 { 00:17:02.634 "name": "pt2", 00:17:02.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.634 "is_configured": true, 00:17:02.634 "data_offset": 2048, 00:17:02.634 "data_size": 63488 00:17:02.634 }, 00:17:02.634 { 00:17:02.634 "name": "pt3", 00:17:02.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.634 "is_configured": true, 00:17:02.634 "data_offset": 2048, 00:17:02.634 "data_size": 63488 00:17:02.634 }, 00:17:02.634 { 00:17:02.634 "name": "pt4", 00:17:02.634 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.634 "is_configured": true, 00:17:02.634 "data_offset": 2048, 00:17:02.634 "data_size": 63488 00:17:02.634 } 00:17:02.634 ] 00:17:02.634 }' 00:17:02.634 09:29:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.634 09:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.895 [2024-11-20 09:29:28.239623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.895 [2024-11-20 09:29:28.239720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.895 [2024-11-20 09:29:28.239820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.895 [2024-11-20 09:29:28.239909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.895 [2024-11-20 09:29:28.239925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.895 [2024-11-20 09:29:28.315526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.895 [2024-11-20 09:29:28.315613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.895 [2024-11-20 09:29:28.315646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:02.895 [2024-11-20 09:29:28.315662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.895 [2024-11-20 09:29:28.318541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.895 [2024-11-20 09:29:28.318606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.895 [2024-11-20 09:29:28.318744] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.895 [2024-11-20 09:29:28.318817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.895 
[2024-11-20 09:29:28.319011] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:02.895 [2024-11-20 09:29:28.319035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.895 [2024-11-20 09:29:28.319056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:02.895 [2024-11-20 09:29:28.319154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.895 [2024-11-20 09:29:28.319306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.895 pt1 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.895 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.154 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.154 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.154 "name": "raid_bdev1", 00:17:03.154 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:03.154 "strip_size_kb": 64, 00:17:03.154 "state": "configuring", 00:17:03.154 "raid_level": "raid5f", 00:17:03.154 "superblock": true, 00:17:03.154 "num_base_bdevs": 4, 00:17:03.154 "num_base_bdevs_discovered": 2, 00:17:03.154 "num_base_bdevs_operational": 3, 00:17:03.154 "base_bdevs_list": [ 00:17:03.154 { 00:17:03.154 "name": null, 00:17:03.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.154 "is_configured": false, 00:17:03.154 "data_offset": 2048, 00:17:03.154 "data_size": 63488 00:17:03.154 }, 00:17:03.154 { 00:17:03.154 "name": "pt2", 00:17:03.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.154 "is_configured": true, 00:17:03.154 "data_offset": 2048, 00:17:03.154 "data_size": 63488 00:17:03.154 }, 00:17:03.154 { 00:17:03.154 "name": "pt3", 00:17:03.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.154 "is_configured": true, 00:17:03.154 "data_offset": 2048, 00:17:03.154 "data_size": 63488 00:17:03.154 }, 00:17:03.154 { 00:17:03.154 "name": null, 00:17:03.154 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.154 "is_configured": false, 00:17:03.154 "data_offset": 2048, 00:17:03.154 "data_size": 63488 00:17:03.154 } 00:17:03.154 ] 
00:17:03.154 }' 00:17:03.154 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.154 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.413 [2024-11-20 09:29:28.830870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:03.413 [2024-11-20 09:29:28.830945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.413 [2024-11-20 09:29:28.830975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:03.413 [2024-11-20 09:29:28.830987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.413 [2024-11-20 09:29:28.831560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.413 [2024-11-20 09:29:28.831597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:03.413 [2024-11-20 09:29:28.831702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:03.413 [2024-11-20 09:29:28.831742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:03.413 [2024-11-20 09:29:28.831922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:03.413 [2024-11-20 09:29:28.831934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.413 [2024-11-20 09:29:28.832251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:03.413 [2024-11-20 09:29:28.841811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:03.413 pt4 00:17:03.413 [2024-11-20 09:29:28.841896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:03.413 [2024-11-20 09:29:28.842232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.413 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.414 09:29:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.414 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.673 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.673 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.673 "name": "raid_bdev1", 00:17:03.673 "uuid": "3e41ce29-aa38-4ecd-bbcc-04de4b6225a4", 00:17:03.673 "strip_size_kb": 64, 00:17:03.673 "state": "online", 00:17:03.673 "raid_level": "raid5f", 00:17:03.673 "superblock": true, 00:17:03.673 "num_base_bdevs": 4, 00:17:03.673 "num_base_bdevs_discovered": 3, 00:17:03.673 "num_base_bdevs_operational": 3, 00:17:03.673 "base_bdevs_list": [ 00:17:03.673 { 00:17:03.673 "name": null, 00:17:03.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.673 "is_configured": false, 00:17:03.673 "data_offset": 2048, 00:17:03.673 "data_size": 63488 00:17:03.673 }, 00:17:03.673 { 00:17:03.673 "name": "pt2", 00:17:03.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.673 "is_configured": true, 00:17:03.673 "data_offset": 2048, 00:17:03.673 "data_size": 63488 00:17:03.673 }, 00:17:03.673 { 00:17:03.673 "name": "pt3", 00:17:03.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.673 "is_configured": true, 00:17:03.673 "data_offset": 2048, 00:17:03.673 "data_size": 63488 
00:17:03.673 }, 00:17:03.673 { 00:17:03.673 "name": "pt4", 00:17:03.673 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.673 "is_configured": true, 00:17:03.673 "data_offset": 2048, 00:17:03.673 "data_size": 63488 00:17:03.673 } 00:17:03.673 ] 00:17:03.673 }' 00:17:03.673 09:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.673 09:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.944 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 [2024-11-20 09:29:29.380115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3e41ce29-aa38-4ecd-bbcc-04de4b6225a4 '!=' 3e41ce29-aa38-4ecd-bbcc-04de4b6225a4 ']' 00:17:04.206 09:29:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84538 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84538 ']' 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84538 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84538 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.206 killing process with pid 84538 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84538' 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84538 00:17:04.206 [2024-11-20 09:29:29.457739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.206 [2024-11-20 09:29:29.457864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.206 09:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84538 00:17:04.206 [2024-11-20 09:29:29.457956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.206 [2024-11-20 09:29:29.457972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:04.465 [2024-11-20 09:29:29.901599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.842 ************************************ 00:17:05.842 END TEST raid5f_superblock_test 00:17:05.842 
************************************ 00:17:05.842 09:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:05.842 00:17:05.842 real 0m8.941s 00:17:05.842 user 0m14.020s 00:17:05.842 sys 0m1.608s 00:17:05.842 09:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.842 09:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.842 09:29:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:05.842 09:29:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:05.842 09:29:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:05.842 09:29:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.842 09:29:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.842 ************************************ 00:17:05.842 START TEST raid5f_rebuild_test 00:17:05.842 ************************************ 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:05.842 09:29:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85029 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85029 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85029 ']' 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.842 09:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.842 [2024-11-20 09:29:31.249822] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:17:05.842 [2024-11-20 09:29:31.250049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.842 Zero copy mechanism will not be used. 
00:17:05.842 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85029 ] 00:17:06.100 [2024-11-20 09:29:31.405207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.100 [2024-11-20 09:29:31.525423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.359 [2024-11-20 09:29:31.734794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.359 [2024-11-20 09:29:31.734916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.926 BaseBdev1_malloc 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.926 [2024-11-20 09:29:32.154673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.926 [2024-11-20 09:29:32.154744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:06.926 [2024-11-20 09:29:32.154771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.926 [2024-11-20 09:29:32.154784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.926 [2024-11-20 09:29:32.157045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.926 [2024-11-20 09:29:32.157086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.926 BaseBdev1 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.926 BaseBdev2_malloc 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.926 [2024-11-20 09:29:32.211634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.926 [2024-11-20 09:29:32.211701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.926 [2024-11-20 09:29:32.211723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.926 [2024-11-20 09:29:32.211734] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.926 [2024-11-20 09:29:32.213900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.926 [2024-11-20 09:29:32.214001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.926 BaseBdev2 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.926 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.926 BaseBdev3_malloc 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.927 [2024-11-20 09:29:32.280172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.927 [2024-11-20 09:29:32.280301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.927 [2024-11-20 09:29:32.280334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.927 [2024-11-20 09:29:32.280348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.927 [2024-11-20 09:29:32.282590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.927 [2024-11-20 
09:29:32.282629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.927 BaseBdev3 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.927 BaseBdev4_malloc 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.927 [2024-11-20 09:29:32.336574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:06.927 [2024-11-20 09:29:32.336672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.927 [2024-11-20 09:29:32.336696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:06.927 [2024-11-20 09:29:32.336707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.927 [2024-11-20 09:29:32.338765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.927 [2024-11-20 09:29:32.338804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:06.927 BaseBdev4 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.927 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 spare_malloc 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 spare_delay 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 [2024-11-20 09:29:32.408158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.185 [2024-11-20 09:29:32.408283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.185 [2024-11-20 09:29:32.408311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:07.185 [2024-11-20 09:29:32.408322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.185 [2024-11-20 09:29:32.410394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.185 [2024-11-20 09:29:32.410499] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.185 spare 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 [2024-11-20 09:29:32.420182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.185 [2024-11-20 09:29:32.422005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.185 [2024-11-20 09:29:32.422113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.185 [2024-11-20 09:29:32.422198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:07.185 [2024-11-20 09:29:32.422322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.185 [2024-11-20 09:29:32.422369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:07.185 [2024-11-20 09:29:32.422643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.185 [2024-11-20 09:29:32.430062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.185 [2024-11-20 09:29:32.430114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.185 [2024-11-20 09:29:32.430366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 09:29:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.185 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.185 "name": "raid_bdev1", 00:17:07.185 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:07.185 "strip_size_kb": 64, 00:17:07.185 "state": "online", 00:17:07.185 "raid_level": "raid5f", 00:17:07.185 "superblock": false, 00:17:07.185 "num_base_bdevs": 4, 00:17:07.185 
"num_base_bdevs_discovered": 4, 00:17:07.185 "num_base_bdevs_operational": 4, 00:17:07.185 "base_bdevs_list": [ 00:17:07.185 { 00:17:07.185 "name": "BaseBdev1", 00:17:07.185 "uuid": "fde52b65-0fc9-558f-a074-df3ce791c26e", 00:17:07.185 "is_configured": true, 00:17:07.185 "data_offset": 0, 00:17:07.185 "data_size": 65536 00:17:07.185 }, 00:17:07.185 { 00:17:07.185 "name": "BaseBdev2", 00:17:07.185 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:07.185 "is_configured": true, 00:17:07.185 "data_offset": 0, 00:17:07.185 "data_size": 65536 00:17:07.185 }, 00:17:07.185 { 00:17:07.185 "name": "BaseBdev3", 00:17:07.185 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:07.185 "is_configured": true, 00:17:07.185 "data_offset": 0, 00:17:07.185 "data_size": 65536 00:17:07.185 }, 00:17:07.185 { 00:17:07.185 "name": "BaseBdev4", 00:17:07.186 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:07.186 "is_configured": true, 00:17:07.186 "data_offset": 0, 00:17:07.186 "data_size": 65536 00:17:07.186 } 00:17:07.186 ] 00:17:07.186 }' 00:17:07.186 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.186 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.443 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.443 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.443 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.443 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:07.443 [2024-11-20 09:29:32.867092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.443 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.701 09:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:07.701 [2024-11-20 09:29:33.138425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:07.960 /dev/nbd0 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.960 1+0 records in 00:17:07.960 1+0 records out 00:17:07.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307669 s, 13.3 MB/s 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:07.960 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:08.525 512+0 records in 00:17:08.526 512+0 records out 00:17:08.526 100663296 bytes (101 MB, 96 MiB) copied, 0.540233 s, 186 MB/s 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.526 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:08.791 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.791 [2024-11-20 09:29:33.997892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:08.791 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.791 09:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.791 [2024-11-20 09:29:34.017686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.791 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.792 "name": "raid_bdev1", 00:17:08.792 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:08.792 "strip_size_kb": 64, 00:17:08.792 "state": "online", 00:17:08.792 "raid_level": "raid5f", 00:17:08.792 "superblock": false, 00:17:08.792 "num_base_bdevs": 4, 00:17:08.792 "num_base_bdevs_discovered": 3, 00:17:08.792 "num_base_bdevs_operational": 3, 00:17:08.792 "base_bdevs_list": [ 00:17:08.792 { 00:17:08.792 "name": null, 00:17:08.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.792 "is_configured": false, 00:17:08.792 "data_offset": 0, 00:17:08.792 "data_size": 65536 00:17:08.792 }, 00:17:08.792 { 00:17:08.792 "name": "BaseBdev2", 00:17:08.792 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:08.792 "is_configured": true, 00:17:08.792 "data_offset": 0, 00:17:08.792 "data_size": 65536 00:17:08.792 }, 00:17:08.792 { 00:17:08.792 "name": "BaseBdev3", 00:17:08.792 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:08.792 "is_configured": true, 00:17:08.792 
"data_offset": 0, 00:17:08.792 "data_size": 65536 00:17:08.792 }, 00:17:08.792 { 00:17:08.792 "name": "BaseBdev4", 00:17:08.792 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:08.792 "is_configured": true, 00:17:08.792 "data_offset": 0, 00:17:08.792 "data_size": 65536 00:17:08.792 } 00:17:08.792 ] 00:17:08.792 }' 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.792 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.061 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.061 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.061 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.061 [2024-11-20 09:29:34.504870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.318 [2024-11-20 09:29:34.524676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:09.318 09:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.318 09:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.318 [2024-11-20 09:29:34.536961] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.252 
09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.252 "name": "raid_bdev1", 00:17:10.252 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:10.252 "strip_size_kb": 64, 00:17:10.252 "state": "online", 00:17:10.252 "raid_level": "raid5f", 00:17:10.252 "superblock": false, 00:17:10.252 "num_base_bdevs": 4, 00:17:10.252 "num_base_bdevs_discovered": 4, 00:17:10.252 "num_base_bdevs_operational": 4, 00:17:10.252 "process": { 00:17:10.252 "type": "rebuild", 00:17:10.252 "target": "spare", 00:17:10.252 "progress": { 00:17:10.252 "blocks": 17280, 00:17:10.252 "percent": 8 00:17:10.252 } 00:17:10.252 }, 00:17:10.252 "base_bdevs_list": [ 00:17:10.252 { 00:17:10.252 "name": "spare", 00:17:10.252 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:10.252 "is_configured": true, 00:17:10.252 "data_offset": 0, 00:17:10.252 "data_size": 65536 00:17:10.252 }, 00:17:10.252 { 00:17:10.252 "name": "BaseBdev2", 00:17:10.252 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:10.252 "is_configured": true, 00:17:10.252 "data_offset": 0, 00:17:10.252 "data_size": 65536 00:17:10.252 }, 00:17:10.252 { 00:17:10.252 "name": "BaseBdev3", 00:17:10.252 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:10.252 "is_configured": true, 00:17:10.252 "data_offset": 0, 00:17:10.252 "data_size": 65536 00:17:10.252 }, 00:17:10.252 { 00:17:10.252 "name": "BaseBdev4", 00:17:10.252 "uuid": 
"b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:10.252 "is_configured": true, 00:17:10.252 "data_offset": 0, 00:17:10.252 "data_size": 65536 00:17:10.252 } 00:17:10.252 ] 00:17:10.252 }' 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.252 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.252 [2024-11-20 09:29:35.648807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.511 [2024-11-20 09:29:35.746953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.511 [2024-11-20 09:29:35.747123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.511 [2024-11-20 09:29:35.747146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.511 [2024-11-20 09:29:35.747157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.511 "name": "raid_bdev1", 00:17:10.511 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:10.511 "strip_size_kb": 64, 00:17:10.511 "state": "online", 00:17:10.511 "raid_level": "raid5f", 00:17:10.511 "superblock": false, 00:17:10.511 "num_base_bdevs": 4, 00:17:10.511 "num_base_bdevs_discovered": 3, 00:17:10.511 "num_base_bdevs_operational": 3, 00:17:10.511 "base_bdevs_list": [ 00:17:10.511 { 00:17:10.511 "name": null, 00:17:10.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.511 "is_configured": false, 00:17:10.511 "data_offset": 0, 
00:17:10.511 "data_size": 65536 00:17:10.511 }, 00:17:10.511 { 00:17:10.511 "name": "BaseBdev2", 00:17:10.511 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:10.511 "is_configured": true, 00:17:10.511 "data_offset": 0, 00:17:10.511 "data_size": 65536 00:17:10.511 }, 00:17:10.511 { 00:17:10.511 "name": "BaseBdev3", 00:17:10.511 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:10.511 "is_configured": true, 00:17:10.511 "data_offset": 0, 00:17:10.511 "data_size": 65536 00:17:10.511 }, 00:17:10.511 { 00:17:10.511 "name": "BaseBdev4", 00:17:10.511 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:10.511 "is_configured": true, 00:17:10.511 "data_offset": 0, 00:17:10.511 "data_size": 65536 00:17:10.511 } 00:17:10.511 ] 00:17:10.511 }' 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.511 09:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.078 "name": "raid_bdev1", 00:17:11.078 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:11.078 "strip_size_kb": 64, 00:17:11.078 "state": "online", 00:17:11.078 "raid_level": "raid5f", 00:17:11.078 "superblock": false, 00:17:11.078 "num_base_bdevs": 4, 00:17:11.078 "num_base_bdevs_discovered": 3, 00:17:11.078 "num_base_bdevs_operational": 3, 00:17:11.078 "base_bdevs_list": [ 00:17:11.078 { 00:17:11.078 "name": null, 00:17:11.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.078 "is_configured": false, 00:17:11.078 "data_offset": 0, 00:17:11.078 "data_size": 65536 00:17:11.078 }, 00:17:11.078 { 00:17:11.078 "name": "BaseBdev2", 00:17:11.078 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:11.078 "is_configured": true, 00:17:11.078 "data_offset": 0, 00:17:11.078 "data_size": 65536 00:17:11.078 }, 00:17:11.078 { 00:17:11.078 "name": "BaseBdev3", 00:17:11.078 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:11.078 "is_configured": true, 00:17:11.078 "data_offset": 0, 00:17:11.078 "data_size": 65536 00:17:11.078 }, 00:17:11.078 { 00:17:11.078 "name": "BaseBdev4", 00:17:11.078 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:11.078 "is_configured": true, 00:17:11.078 "data_offset": 0, 00:17:11.078 "data_size": 65536 00:17:11.078 } 00:17:11.078 ] 00:17:11.078 }' 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.078 [2024-11-20 09:29:36.384034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.078 [2024-11-20 09:29:36.403000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.078 09:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:11.078 [2024-11-20 09:29:36.414561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.011 09:29:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.011 "name": "raid_bdev1", 00:17:12.011 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:12.011 "strip_size_kb": 64, 00:17:12.011 "state": "online", 00:17:12.011 "raid_level": "raid5f", 00:17:12.011 "superblock": false, 00:17:12.011 "num_base_bdevs": 4, 00:17:12.011 "num_base_bdevs_discovered": 4, 00:17:12.011 "num_base_bdevs_operational": 4, 00:17:12.011 "process": { 00:17:12.011 "type": "rebuild", 00:17:12.011 "target": "spare", 00:17:12.011 "progress": { 00:17:12.011 "blocks": 17280, 00:17:12.011 "percent": 8 00:17:12.011 } 00:17:12.011 }, 00:17:12.011 "base_bdevs_list": [ 00:17:12.011 { 00:17:12.011 "name": "spare", 00:17:12.011 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:12.011 "is_configured": true, 00:17:12.011 "data_offset": 0, 00:17:12.011 "data_size": 65536 00:17:12.011 }, 00:17:12.011 { 00:17:12.011 "name": "BaseBdev2", 00:17:12.011 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:12.011 "is_configured": true, 00:17:12.011 "data_offset": 0, 00:17:12.011 "data_size": 65536 00:17:12.011 }, 00:17:12.011 { 00:17:12.011 "name": "BaseBdev3", 00:17:12.011 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:12.011 "is_configured": true, 00:17:12.011 "data_offset": 0, 00:17:12.011 "data_size": 65536 00:17:12.011 }, 00:17:12.011 { 00:17:12.011 "name": "BaseBdev4", 00:17:12.011 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:12.011 "is_configured": true, 00:17:12.011 "data_offset": 0, 00:17:12.011 "data_size": 65536 00:17:12.011 } 00:17:12.011 ] 00:17:12.011 }' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=652 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.269 "name": "raid_bdev1", 00:17:12.269 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:12.269 "strip_size_kb": 64, 00:17:12.269 "state": "online", 00:17:12.269 "raid_level": "raid5f", 00:17:12.269 "superblock": false, 
00:17:12.269 "num_base_bdevs": 4, 00:17:12.269 "num_base_bdevs_discovered": 4, 00:17:12.269 "num_base_bdevs_operational": 4, 00:17:12.269 "process": { 00:17:12.269 "type": "rebuild", 00:17:12.269 "target": "spare", 00:17:12.269 "progress": { 00:17:12.269 "blocks": 21120, 00:17:12.269 "percent": 10 00:17:12.269 } 00:17:12.269 }, 00:17:12.269 "base_bdevs_list": [ 00:17:12.269 { 00:17:12.269 "name": "spare", 00:17:12.269 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:12.269 "is_configured": true, 00:17:12.269 "data_offset": 0, 00:17:12.269 "data_size": 65536 00:17:12.269 }, 00:17:12.269 { 00:17:12.269 "name": "BaseBdev2", 00:17:12.269 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:12.269 "is_configured": true, 00:17:12.269 "data_offset": 0, 00:17:12.269 "data_size": 65536 00:17:12.269 }, 00:17:12.269 { 00:17:12.269 "name": "BaseBdev3", 00:17:12.269 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:12.269 "is_configured": true, 00:17:12.269 "data_offset": 0, 00:17:12.269 "data_size": 65536 00:17:12.269 }, 00:17:12.269 { 00:17:12.269 "name": "BaseBdev4", 00:17:12.269 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:12.269 "is_configured": true, 00:17:12.269 "data_offset": 0, 00:17:12.269 "data_size": 65536 00:17:12.269 } 00:17:12.269 ] 00:17:12.269 }' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.269 09:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.642 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.642 09:29:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.643 "name": "raid_bdev1", 00:17:13.643 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:13.643 "strip_size_kb": 64, 00:17:13.643 "state": "online", 00:17:13.643 "raid_level": "raid5f", 00:17:13.643 "superblock": false, 00:17:13.643 "num_base_bdevs": 4, 00:17:13.643 "num_base_bdevs_discovered": 4, 00:17:13.643 "num_base_bdevs_operational": 4, 00:17:13.643 "process": { 00:17:13.643 "type": "rebuild", 00:17:13.643 "target": "spare", 00:17:13.643 "progress": { 00:17:13.643 "blocks": 42240, 00:17:13.643 "percent": 21 00:17:13.643 } 00:17:13.643 }, 00:17:13.643 "base_bdevs_list": [ 00:17:13.643 { 00:17:13.643 "name": "spare", 00:17:13.643 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:13.643 "is_configured": true, 00:17:13.643 "data_offset": 0, 00:17:13.643 "data_size": 65536 00:17:13.643 }, 00:17:13.643 { 00:17:13.643 
"name": "BaseBdev2", 00:17:13.643 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:13.643 "is_configured": true, 00:17:13.643 "data_offset": 0, 00:17:13.643 "data_size": 65536 00:17:13.643 }, 00:17:13.643 { 00:17:13.643 "name": "BaseBdev3", 00:17:13.643 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:13.643 "is_configured": true, 00:17:13.643 "data_offset": 0, 00:17:13.643 "data_size": 65536 00:17:13.643 }, 00:17:13.643 { 00:17:13.643 "name": "BaseBdev4", 00:17:13.643 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:13.643 "is_configured": true, 00:17:13.643 "data_offset": 0, 00:17:13.643 "data_size": 65536 00:17:13.643 } 00:17:13.643 ] 00:17:13.643 }' 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.643 09:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.575 "name": "raid_bdev1", 00:17:14.575 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:14.575 "strip_size_kb": 64, 00:17:14.575 "state": "online", 00:17:14.575 "raid_level": "raid5f", 00:17:14.575 "superblock": false, 00:17:14.575 "num_base_bdevs": 4, 00:17:14.575 "num_base_bdevs_discovered": 4, 00:17:14.575 "num_base_bdevs_operational": 4, 00:17:14.575 "process": { 00:17:14.575 "type": "rebuild", 00:17:14.575 "target": "spare", 00:17:14.575 "progress": { 00:17:14.575 "blocks": 65280, 00:17:14.575 "percent": 33 00:17:14.575 } 00:17:14.575 }, 00:17:14.575 "base_bdevs_list": [ 00:17:14.575 { 00:17:14.575 "name": "spare", 00:17:14.575 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:14.575 "is_configured": true, 00:17:14.575 "data_offset": 0, 00:17:14.575 "data_size": 65536 00:17:14.575 }, 00:17:14.575 { 00:17:14.575 "name": "BaseBdev2", 00:17:14.575 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:14.575 "is_configured": true, 00:17:14.575 "data_offset": 0, 00:17:14.575 "data_size": 65536 00:17:14.575 }, 00:17:14.575 { 00:17:14.575 "name": "BaseBdev3", 00:17:14.575 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:14.575 "is_configured": true, 00:17:14.575 "data_offset": 0, 00:17:14.575 "data_size": 65536 00:17:14.575 }, 00:17:14.575 { 00:17:14.575 "name": "BaseBdev4", 00:17:14.575 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:14.575 "is_configured": true, 00:17:14.575 "data_offset": 0, 00:17:14.575 
"data_size": 65536 00:17:14.575 } 00:17:14.575 ] 00:17:14.575 }' 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.575 09:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 09:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.949 "name": "raid_bdev1", 00:17:15.949 "uuid": 
"358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:15.949 "strip_size_kb": 64, 00:17:15.949 "state": "online", 00:17:15.949 "raid_level": "raid5f", 00:17:15.949 "superblock": false, 00:17:15.949 "num_base_bdevs": 4, 00:17:15.949 "num_base_bdevs_discovered": 4, 00:17:15.949 "num_base_bdevs_operational": 4, 00:17:15.949 "process": { 00:17:15.949 "type": "rebuild", 00:17:15.949 "target": "spare", 00:17:15.949 "progress": { 00:17:15.949 "blocks": 86400, 00:17:15.949 "percent": 43 00:17:15.949 } 00:17:15.949 }, 00:17:15.949 "base_bdevs_list": [ 00:17:15.949 { 00:17:15.949 "name": "spare", 00:17:15.949 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:15.949 "is_configured": true, 00:17:15.949 "data_offset": 0, 00:17:15.949 "data_size": 65536 00:17:15.949 }, 00:17:15.949 { 00:17:15.949 "name": "BaseBdev2", 00:17:15.949 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:15.949 "is_configured": true, 00:17:15.949 "data_offset": 0, 00:17:15.949 "data_size": 65536 00:17:15.949 }, 00:17:15.949 { 00:17:15.949 "name": "BaseBdev3", 00:17:15.949 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:15.949 "is_configured": true, 00:17:15.949 "data_offset": 0, 00:17:15.949 "data_size": 65536 00:17:15.949 }, 00:17:15.949 { 00:17:15.949 "name": "BaseBdev4", 00:17:15.949 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:15.949 "is_configured": true, 00:17:15.949 "data_offset": 0, 00:17:15.949 "data_size": 65536 00:17:15.949 } 00:17:15.949 ] 00:17:15.949 }' 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.949 09:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.884 "name": "raid_bdev1", 00:17:16.884 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:16.884 "strip_size_kb": 64, 00:17:16.884 "state": "online", 00:17:16.884 "raid_level": "raid5f", 00:17:16.884 "superblock": false, 00:17:16.884 "num_base_bdevs": 4, 00:17:16.884 "num_base_bdevs_discovered": 4, 00:17:16.884 "num_base_bdevs_operational": 4, 00:17:16.884 "process": { 00:17:16.884 "type": "rebuild", 00:17:16.884 "target": "spare", 00:17:16.884 "progress": { 00:17:16.884 "blocks": 107520, 00:17:16.884 "percent": 54 00:17:16.884 } 00:17:16.884 }, 00:17:16.884 "base_bdevs_list": [ 00:17:16.884 { 00:17:16.884 "name": "spare", 00:17:16.884 "uuid": 
"e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:16.884 "is_configured": true, 00:17:16.884 "data_offset": 0, 00:17:16.884 "data_size": 65536 00:17:16.884 }, 00:17:16.884 { 00:17:16.884 "name": "BaseBdev2", 00:17:16.884 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:16.884 "is_configured": true, 00:17:16.884 "data_offset": 0, 00:17:16.884 "data_size": 65536 00:17:16.884 }, 00:17:16.884 { 00:17:16.884 "name": "BaseBdev3", 00:17:16.884 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:16.884 "is_configured": true, 00:17:16.884 "data_offset": 0, 00:17:16.884 "data_size": 65536 00:17:16.884 }, 00:17:16.884 { 00:17:16.884 "name": "BaseBdev4", 00:17:16.884 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:16.884 "is_configured": true, 00:17:16.884 "data_offset": 0, 00:17:16.884 "data_size": 65536 00:17:16.884 } 00:17:16.884 ] 00:17:16.884 }' 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.884 09:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.817 09:29:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.817 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.075 "name": "raid_bdev1", 00:17:18.075 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:18.075 "strip_size_kb": 64, 00:17:18.075 "state": "online", 00:17:18.075 "raid_level": "raid5f", 00:17:18.075 "superblock": false, 00:17:18.075 "num_base_bdevs": 4, 00:17:18.075 "num_base_bdevs_discovered": 4, 00:17:18.075 "num_base_bdevs_operational": 4, 00:17:18.075 "process": { 00:17:18.075 "type": "rebuild", 00:17:18.075 "target": "spare", 00:17:18.075 "progress": { 00:17:18.075 "blocks": 128640, 00:17:18.075 "percent": 65 00:17:18.075 } 00:17:18.075 }, 00:17:18.075 "base_bdevs_list": [ 00:17:18.075 { 00:17:18.075 "name": "spare", 00:17:18.075 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:18.075 "is_configured": true, 00:17:18.075 "data_offset": 0, 00:17:18.075 "data_size": 65536 00:17:18.075 }, 00:17:18.075 { 00:17:18.075 "name": "BaseBdev2", 00:17:18.075 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:18.075 "is_configured": true, 00:17:18.075 "data_offset": 0, 00:17:18.075 "data_size": 65536 00:17:18.075 }, 00:17:18.075 { 00:17:18.075 "name": "BaseBdev3", 00:17:18.075 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:18.075 "is_configured": true, 00:17:18.075 "data_offset": 0, 00:17:18.075 "data_size": 65536 00:17:18.075 }, 
00:17:18.075 { 00:17:18.075 "name": "BaseBdev4", 00:17:18.075 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:18.075 "is_configured": true, 00:17:18.075 "data_offset": 0, 00:17:18.075 "data_size": 65536 00:17:18.075 } 00:17:18.075 ] 00:17:18.075 }' 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.075 09:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.010 "name": "raid_bdev1", 00:17:19.010 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:19.010 "strip_size_kb": 64, 00:17:19.010 "state": "online", 00:17:19.010 "raid_level": "raid5f", 00:17:19.010 "superblock": false, 00:17:19.010 "num_base_bdevs": 4, 00:17:19.010 "num_base_bdevs_discovered": 4, 00:17:19.010 "num_base_bdevs_operational": 4, 00:17:19.010 "process": { 00:17:19.010 "type": "rebuild", 00:17:19.010 "target": "spare", 00:17:19.010 "progress": { 00:17:19.010 "blocks": 151680, 00:17:19.010 "percent": 77 00:17:19.010 } 00:17:19.010 }, 00:17:19.010 "base_bdevs_list": [ 00:17:19.010 { 00:17:19.010 "name": "spare", 00:17:19.010 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:19.010 "is_configured": true, 00:17:19.010 "data_offset": 0, 00:17:19.010 "data_size": 65536 00:17:19.010 }, 00:17:19.010 { 00:17:19.010 "name": "BaseBdev2", 00:17:19.010 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:19.010 "is_configured": true, 00:17:19.010 "data_offset": 0, 00:17:19.010 "data_size": 65536 00:17:19.010 }, 00:17:19.010 { 00:17:19.010 "name": "BaseBdev3", 00:17:19.010 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:19.010 "is_configured": true, 00:17:19.010 "data_offset": 0, 00:17:19.010 "data_size": 65536 00:17:19.010 }, 00:17:19.010 { 00:17:19.010 "name": "BaseBdev4", 00:17:19.010 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:19.010 "is_configured": true, 00:17:19.010 "data_offset": 0, 00:17:19.010 "data_size": 65536 00:17:19.010 } 00:17:19.010 ] 00:17:19.010 }' 00:17:19.010 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.269 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.269 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.269 09:29:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.269 09:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.203 "name": "raid_bdev1", 00:17:20.203 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:20.203 "strip_size_kb": 64, 00:17:20.203 "state": "online", 00:17:20.203 "raid_level": "raid5f", 00:17:20.203 "superblock": false, 00:17:20.203 "num_base_bdevs": 4, 00:17:20.203 "num_base_bdevs_discovered": 4, 00:17:20.203 "num_base_bdevs_operational": 4, 00:17:20.203 "process": { 00:17:20.203 "type": "rebuild", 00:17:20.203 "target": "spare", 00:17:20.203 "progress": { 00:17:20.203 "blocks": 172800, 
00:17:20.203 "percent": 87 00:17:20.203 } 00:17:20.203 }, 00:17:20.203 "base_bdevs_list": [ 00:17:20.203 { 00:17:20.203 "name": "spare", 00:17:20.203 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:20.203 "is_configured": true, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 65536 00:17:20.203 }, 00:17:20.203 { 00:17:20.203 "name": "BaseBdev2", 00:17:20.203 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:20.203 "is_configured": true, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 65536 00:17:20.203 }, 00:17:20.203 { 00:17:20.203 "name": "BaseBdev3", 00:17:20.203 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:20.203 "is_configured": true, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 65536 00:17:20.203 }, 00:17:20.203 { 00:17:20.203 "name": "BaseBdev4", 00:17:20.203 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:20.203 "is_configured": true, 00:17:20.203 "data_offset": 0, 00:17:20.203 "data_size": 65536 00:17:20.203 } 00:17:20.203 ] 00:17:20.203 }' 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.203 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.462 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.462 09:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.398 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.398 "name": "raid_bdev1", 00:17:21.398 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:21.398 "strip_size_kb": 64, 00:17:21.398 "state": "online", 00:17:21.398 "raid_level": "raid5f", 00:17:21.398 "superblock": false, 00:17:21.398 "num_base_bdevs": 4, 00:17:21.399 "num_base_bdevs_discovered": 4, 00:17:21.399 "num_base_bdevs_operational": 4, 00:17:21.399 "process": { 00:17:21.399 "type": "rebuild", 00:17:21.399 "target": "spare", 00:17:21.399 "progress": { 00:17:21.399 "blocks": 195840, 00:17:21.399 "percent": 99 00:17:21.399 } 00:17:21.399 }, 00:17:21.399 "base_bdevs_list": [ 00:17:21.399 { 00:17:21.399 "name": "spare", 00:17:21.399 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:21.399 "is_configured": true, 00:17:21.399 "data_offset": 0, 00:17:21.399 "data_size": 65536 00:17:21.399 }, 00:17:21.399 { 00:17:21.399 "name": "BaseBdev2", 00:17:21.399 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:21.399 "is_configured": true, 00:17:21.399 "data_offset": 0, 00:17:21.399 "data_size": 65536 00:17:21.399 }, 00:17:21.399 { 00:17:21.399 "name": "BaseBdev3", 00:17:21.399 "uuid": 
"2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:21.399 "is_configured": true, 00:17:21.399 "data_offset": 0, 00:17:21.399 "data_size": 65536 00:17:21.399 }, 00:17:21.399 { 00:17:21.399 "name": "BaseBdev4", 00:17:21.399 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:21.399 "is_configured": true, 00:17:21.399 "data_offset": 0, 00:17:21.399 "data_size": 65536 00:17:21.399 } 00:17:21.399 ] 00:17:21.399 }' 00:17:21.399 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.399 [2024-11-20 09:29:46.795602] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.399 [2024-11-20 09:29:46.795699] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.399 [2024-11-20 09:29:46.795758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.399 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.399 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.657 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.657 09:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.601 "name": "raid_bdev1", 00:17:22.601 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:22.601 "strip_size_kb": 64, 00:17:22.601 "state": "online", 00:17:22.601 "raid_level": "raid5f", 00:17:22.601 "superblock": false, 00:17:22.601 "num_base_bdevs": 4, 00:17:22.601 "num_base_bdevs_discovered": 4, 00:17:22.601 "num_base_bdevs_operational": 4, 00:17:22.601 "base_bdevs_list": [ 00:17:22.601 { 00:17:22.601 "name": "spare", 00:17:22.601 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 }, 00:17:22.601 { 00:17:22.601 "name": "BaseBdev2", 00:17:22.601 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 }, 00:17:22.601 { 00:17:22.601 "name": "BaseBdev3", 00:17:22.601 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 }, 00:17:22.601 { 00:17:22.601 "name": "BaseBdev4", 00:17:22.601 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 } 00:17:22.601 ] 00:17:22.601 }' 00:17:22.601 09:29:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.601 09:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.601 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.601 "name": "raid_bdev1", 00:17:22.601 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:22.601 "strip_size_kb": 64, 00:17:22.601 "state": "online", 00:17:22.601 "raid_level": "raid5f", 00:17:22.601 "superblock": false, 00:17:22.601 "num_base_bdevs": 4, 00:17:22.601 
"num_base_bdevs_discovered": 4, 00:17:22.601 "num_base_bdevs_operational": 4, 00:17:22.601 "base_bdevs_list": [ 00:17:22.601 { 00:17:22.601 "name": "spare", 00:17:22.601 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 }, 00:17:22.601 { 00:17:22.601 "name": "BaseBdev2", 00:17:22.601 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 }, 00:17:22.601 { 00:17:22.601 "name": "BaseBdev3", 00:17:22.601 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 }, 00:17:22.601 { 00:17:22.601 "name": "BaseBdev4", 00:17:22.601 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:22.601 "is_configured": true, 00:17:22.601 "data_offset": 0, 00:17:22.601 "data_size": 65536 00:17:22.601 } 00:17:22.601 ] 00:17:22.601 }' 00:17:22.601 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.859 "name": "raid_bdev1", 00:17:22.859 "uuid": "358a7a93-5c15-4e2b-9b96-2faf31f4f845", 00:17:22.859 "strip_size_kb": 64, 00:17:22.859 "state": "online", 00:17:22.859 "raid_level": "raid5f", 00:17:22.859 "superblock": false, 00:17:22.859 "num_base_bdevs": 4, 00:17:22.859 "num_base_bdevs_discovered": 4, 00:17:22.859 "num_base_bdevs_operational": 4, 00:17:22.859 "base_bdevs_list": [ 00:17:22.859 { 00:17:22.859 "name": "spare", 00:17:22.859 "uuid": "e4c0d895-8a19-523f-b769-402d226d62b9", 00:17:22.859 "is_configured": true, 00:17:22.859 "data_offset": 0, 00:17:22.859 "data_size": 65536 00:17:22.859 }, 00:17:22.859 { 00:17:22.859 "name": "BaseBdev2", 00:17:22.859 "uuid": "66107b57-18cb-5afa-8f27-5dbb3cdea2d7", 00:17:22.859 "is_configured": true, 00:17:22.859 
"data_offset": 0, 00:17:22.859 "data_size": 65536 00:17:22.859 }, 00:17:22.859 { 00:17:22.859 "name": "BaseBdev3", 00:17:22.859 "uuid": "2eabef4a-cc53-50e3-bc99-5bd59338fd73", 00:17:22.859 "is_configured": true, 00:17:22.859 "data_offset": 0, 00:17:22.859 "data_size": 65536 00:17:22.859 }, 00:17:22.859 { 00:17:22.859 "name": "BaseBdev4", 00:17:22.859 "uuid": "b70edb8f-73b8-59b2-b1ad-e9708267bc48", 00:17:22.859 "is_configured": true, 00:17:22.859 "data_offset": 0, 00:17:22.859 "data_size": 65536 00:17:22.859 } 00:17:22.859 ] 00:17:22.859 }' 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.859 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.115 [2024-11-20 09:29:48.520796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.115 [2024-11-20 09:29:48.520842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.115 [2024-11-20 09:29:48.520960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.115 [2024-11-20 09:29:48.521087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.115 [2024-11-20 09:29:48.521105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.115 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:23.373 /dev/nbd0 00:17:23.373 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.630 1+0 records in 00:17:23.630 1+0 records out 00:17:23.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433509 s, 9.4 MB/s 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.630 09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.630 
09:29:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.630 /dev/nbd1 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.887 1+0 records in 00:17:23.887 1+0 records out 00:17:23.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381401 s, 10.7 MB/s 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.887 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:24.144 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:24.144 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.144 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:24.145 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.145 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:24.145 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.145 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.145 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.404 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85029 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85029 ']' 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85029 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85029 00:17:24.662 killing process with pid 85029 00:17:24.662 Received shutdown signal, test time 
was about 60.000000 seconds 00:17:24.662 00:17:24.662 Latency(us) 00:17:24.662 [2024-11-20T09:29:50.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.662 [2024-11-20T09:29:50.118Z] =================================================================================================================== 00:17:24.662 [2024-11-20T09:29:50.118Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85029' 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85029 00:17:24.662 [2024-11-20 09:29:49.899196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.662 09:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85029 00:17:25.268 [2024-11-20 09:29:50.469212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.226 09:29:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:26.226 00:17:26.226 real 0m20.467s 00:17:26.226 user 0m24.408s 00:17:26.226 sys 0m2.353s 00:17:26.226 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.226 ************************************ 00:17:26.226 END TEST raid5f_rebuild_test 00:17:26.226 ************************************ 00:17:26.226 09:29:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.226 09:29:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:26.226 09:29:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:26.226 09:29:51 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.226 09:29:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.483 ************************************ 00:17:26.483 START TEST raid5f_rebuild_test_sb 00:17:26.483 ************************************ 00:17:26.483 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:26.483 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:26.483 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:26.483 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:26.483 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:26.483 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:26.484 09:29:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85553 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85553 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85553 ']' 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.484 09:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.484 [2024-11-20 09:29:51.785693] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:17:26.484 [2024-11-20 09:29:51.785918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:26.484 Zero copy mechanism will not be used. 
00:17:26.484 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85553 ] 00:17:26.742 [2024-11-20 09:29:51.962351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.742 [2024-11-20 09:29:52.080022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.000 [2024-11-20 09:29:52.299079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.000 [2024-11-20 09:29:52.299245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 BaseBdev1_malloc 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 [2024-11-20 09:29:52.772738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:27.567 [2024-11-20 09:29:52.772839] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:27.567 [2024-11-20 09:29:52.772867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:27.567 [2024-11-20 09:29:52.772879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.567 [2024-11-20 09:29:52.775142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.567 [2024-11-20 09:29:52.775206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:27.567 BaseBdev1 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 BaseBdev2_malloc 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.567 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.567 [2024-11-20 09:29:52.828485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:27.568 [2024-11-20 09:29:52.828566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.568 [2024-11-20 09:29:52.828587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:27.568 
[2024-11-20 09:29:52.828600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.568 [2024-11-20 09:29:52.830795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.568 [2024-11-20 09:29:52.830835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:27.568 BaseBdev2 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 BaseBdev3_malloc 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 [2024-11-20 09:29:52.895308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:27.568 [2024-11-20 09:29:52.895383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.568 [2024-11-20 09:29:52.895410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:27.568 [2024-11-20 09:29:52.895424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.568 [2024-11-20 09:29:52.897772] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.568 [2024-11-20 09:29:52.897814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:27.568 BaseBdev3 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 BaseBdev4_malloc 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 [2024-11-20 09:29:52.953092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:27.568 [2024-11-20 09:29:52.953163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.568 [2024-11-20 09:29:52.953201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:27.568 [2024-11-20 09:29:52.953214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.568 [2024-11-20 09:29:52.955652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.568 [2024-11-20 09:29:52.955706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:17:27.568 BaseBdev4 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 spare_malloc 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.568 spare_delay 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.568 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.827 [2024-11-20 09:29:53.023646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.827 [2024-11-20 09:29:53.023724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.827 [2024-11-20 09:29:53.023751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:27.827 [2024-11-20 09:29:53.023764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.827 [2024-11-20 09:29:53.026371] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.827 [2024-11-20 09:29:53.026422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.827 spare 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.827 [2024-11-20 09:29:53.035684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.827 [2024-11-20 09:29:53.037829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.827 [2024-11-20 09:29:53.037924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:27.827 [2024-11-20 09:29:53.037986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:27.827 [2024-11-20 09:29:53.038210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:27.827 [2024-11-20 09:29:53.038239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:27.827 [2024-11-20 09:29:53.038574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.827 [2024-11-20 09:29:53.047527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:27.827 [2024-11-20 09:29:53.047557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:27.827 [2024-11-20 09:29:53.047839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.827 "name": "raid_bdev1", 00:17:27.827 "uuid": 
"0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:27.827 "strip_size_kb": 64, 00:17:27.827 "state": "online", 00:17:27.827 "raid_level": "raid5f", 00:17:27.827 "superblock": true, 00:17:27.827 "num_base_bdevs": 4, 00:17:27.827 "num_base_bdevs_discovered": 4, 00:17:27.827 "num_base_bdevs_operational": 4, 00:17:27.827 "base_bdevs_list": [ 00:17:27.827 { 00:17:27.827 "name": "BaseBdev1", 00:17:27.827 "uuid": "a4f05ff0-3431-52ac-b338-c2f272bb3ef6", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 2048, 00:17:27.827 "data_size": 63488 00:17:27.827 }, 00:17:27.827 { 00:17:27.827 "name": "BaseBdev2", 00:17:27.827 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 2048, 00:17:27.827 "data_size": 63488 00:17:27.827 }, 00:17:27.827 { 00:17:27.827 "name": "BaseBdev3", 00:17:27.827 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 2048, 00:17:27.827 "data_size": 63488 00:17:27.827 }, 00:17:27.827 { 00:17:27.827 "name": "BaseBdev4", 00:17:27.827 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:27.827 "is_configured": true, 00:17:27.827 "data_offset": 2048, 00:17:27.827 "data_size": 63488 00:17:27.827 } 00:17:27.827 ] 00:17:27.827 }' 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.827 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:28.085 [2024-11-20 09:29:53.489745] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:28.085 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:28.342 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:28.599 [2024-11-20 09:29:53.820961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:28.599 /dev/nbd0 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.599 1+0 records in 00:17:28.599 1+0 records out 00:17:28.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455298 s, 9.0 MB/s 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:28.599 09:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:29.165 496+0 records in 00:17:29.165 496+0 records out 00:17:29.165 97517568 bytes (98 MB, 93 MiB) copied, 0.503633 s, 194 MB/s 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:29.165 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:29.423 [2024-11-20 09:29:54.648938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.423 [2024-11-20 09:29:54.668645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.423 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.423 "name": "raid_bdev1", 00:17:29.423 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:29.423 "strip_size_kb": 64, 00:17:29.423 "state": "online", 00:17:29.423 "raid_level": "raid5f", 00:17:29.423 "superblock": true, 00:17:29.423 "num_base_bdevs": 4, 00:17:29.423 "num_base_bdevs_discovered": 3, 00:17:29.423 "num_base_bdevs_operational": 3, 00:17:29.423 "base_bdevs_list": [ 00:17:29.423 { 00:17:29.423 "name": null, 00:17:29.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.423 "is_configured": 
false, 00:17:29.423 "data_offset": 0, 00:17:29.423 "data_size": 63488 00:17:29.423 }, 00:17:29.423 { 00:17:29.423 "name": "BaseBdev2", 00:17:29.423 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:29.423 "is_configured": true, 00:17:29.423 "data_offset": 2048, 00:17:29.423 "data_size": 63488 00:17:29.424 }, 00:17:29.424 { 00:17:29.424 "name": "BaseBdev3", 00:17:29.424 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:29.424 "is_configured": true, 00:17:29.424 "data_offset": 2048, 00:17:29.424 "data_size": 63488 00:17:29.424 }, 00:17:29.424 { 00:17:29.424 "name": "BaseBdev4", 00:17:29.424 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:29.424 "is_configured": true, 00:17:29.424 "data_offset": 2048, 00:17:29.424 "data_size": 63488 00:17:29.424 } 00:17:29.424 ] 00:17:29.424 }' 00:17:29.424 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.424 09:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.682 09:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.682 09:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.682 09:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.940 [2024-11-20 09:29:55.139891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.940 [2024-11-20 09:29:55.160294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:29.940 09:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.940 09:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:29.940 [2024-11-20 09:29:55.172521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.873 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.873 "name": "raid_bdev1", 00:17:30.874 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:30.874 "strip_size_kb": 64, 00:17:30.874 "state": "online", 00:17:30.874 "raid_level": "raid5f", 00:17:30.874 "superblock": true, 00:17:30.874 "num_base_bdevs": 4, 00:17:30.874 "num_base_bdevs_discovered": 4, 00:17:30.874 "num_base_bdevs_operational": 4, 00:17:30.874 "process": { 00:17:30.874 "type": "rebuild", 00:17:30.874 "target": "spare", 00:17:30.874 "progress": { 00:17:30.874 "blocks": 17280, 00:17:30.874 "percent": 9 00:17:30.874 } 00:17:30.874 }, 00:17:30.874 "base_bdevs_list": [ 00:17:30.874 { 00:17:30.874 "name": "spare", 00:17:30.874 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:30.874 "is_configured": true, 00:17:30.874 "data_offset": 2048, 00:17:30.874 "data_size": 63488 00:17:30.874 }, 
00:17:30.874 { 00:17:30.874 "name": "BaseBdev2", 00:17:30.874 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:30.874 "is_configured": true, 00:17:30.874 "data_offset": 2048, 00:17:30.874 "data_size": 63488 00:17:30.874 }, 00:17:30.874 { 00:17:30.874 "name": "BaseBdev3", 00:17:30.874 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:30.874 "is_configured": true, 00:17:30.874 "data_offset": 2048, 00:17:30.874 "data_size": 63488 00:17:30.874 }, 00:17:30.874 { 00:17:30.874 "name": "BaseBdev4", 00:17:30.874 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:30.874 "is_configured": true, 00:17:30.874 "data_offset": 2048, 00:17:30.874 "data_size": 63488 00:17:30.874 } 00:17:30.874 ] 00:17:30.874 }' 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.874 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.874 [2024-11-20 09:29:56.308642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.131 [2024-11-20 09:29:56.383215] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:31.131 [2024-11-20 09:29:56.383323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.131 [2024-11-20 09:29:56.383345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:31.131 
[2024-11-20 09:29:56.383359] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.131 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.131 "name": "raid_bdev1", 00:17:31.131 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:31.131 "strip_size_kb": 64, 00:17:31.131 "state": "online", 00:17:31.131 "raid_level": "raid5f", 00:17:31.131 "superblock": true, 00:17:31.131 "num_base_bdevs": 4, 00:17:31.131 "num_base_bdevs_discovered": 3, 00:17:31.131 "num_base_bdevs_operational": 3, 00:17:31.131 "base_bdevs_list": [ 00:17:31.131 { 00:17:31.131 "name": null, 00:17:31.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.131 "is_configured": false, 00:17:31.131 "data_offset": 0, 00:17:31.131 "data_size": 63488 00:17:31.131 }, 00:17:31.131 { 00:17:31.131 "name": "BaseBdev2", 00:17:31.131 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:31.131 "is_configured": true, 00:17:31.131 "data_offset": 2048, 00:17:31.131 "data_size": 63488 00:17:31.131 }, 00:17:31.131 { 00:17:31.131 "name": "BaseBdev3", 00:17:31.131 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:31.131 "is_configured": true, 00:17:31.131 "data_offset": 2048, 00:17:31.131 "data_size": 63488 00:17:31.131 }, 00:17:31.131 { 00:17:31.132 "name": "BaseBdev4", 00:17:31.132 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:31.132 "is_configured": true, 00:17:31.132 "data_offset": 2048, 00:17:31.132 "data_size": 63488 00:17:31.132 } 00:17:31.132 ] 00:17:31.132 }' 00:17:31.132 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.132 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.698 "name": "raid_bdev1", 00:17:31.698 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:31.698 "strip_size_kb": 64, 00:17:31.698 "state": "online", 00:17:31.698 "raid_level": "raid5f", 00:17:31.698 "superblock": true, 00:17:31.698 "num_base_bdevs": 4, 00:17:31.698 "num_base_bdevs_discovered": 3, 00:17:31.698 "num_base_bdevs_operational": 3, 00:17:31.698 "base_bdevs_list": [ 00:17:31.698 { 00:17:31.698 "name": null, 00:17:31.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.698 "is_configured": false, 00:17:31.698 "data_offset": 0, 00:17:31.698 "data_size": 63488 00:17:31.698 }, 00:17:31.698 { 00:17:31.698 "name": "BaseBdev2", 00:17:31.698 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:31.698 "is_configured": true, 00:17:31.698 "data_offset": 2048, 00:17:31.698 "data_size": 63488 00:17:31.698 }, 00:17:31.698 { 00:17:31.698 "name": "BaseBdev3", 00:17:31.698 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:31.698 "is_configured": true, 00:17:31.698 "data_offset": 2048, 00:17:31.698 "data_size": 63488 00:17:31.698 }, 00:17:31.698 { 00:17:31.698 "name": "BaseBdev4", 00:17:31.698 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 
00:17:31.698 "is_configured": true, 00:17:31.698 "data_offset": 2048, 00:17:31.698 "data_size": 63488 00:17:31.698 } 00:17:31.698 ] 00:17:31.698 }' 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.698 09:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.698 09:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.698 09:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:31.698 09:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.698 09:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.698 [2024-11-20 09:29:57.008302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.698 [2024-11-20 09:29:57.026807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:31.698 09:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.698 09:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:31.698 [2024-11-20 09:29:57.039705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:32.631 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.631 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.632 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.890 "name": "raid_bdev1", 00:17:32.890 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:32.890 "strip_size_kb": 64, 00:17:32.890 "state": "online", 00:17:32.890 "raid_level": "raid5f", 00:17:32.890 "superblock": true, 00:17:32.890 "num_base_bdevs": 4, 00:17:32.890 "num_base_bdevs_discovered": 4, 00:17:32.890 "num_base_bdevs_operational": 4, 00:17:32.890 "process": { 00:17:32.890 "type": "rebuild", 00:17:32.890 "target": "spare", 00:17:32.890 "progress": { 00:17:32.890 "blocks": 17280, 00:17:32.890 "percent": 9 00:17:32.890 } 00:17:32.890 }, 00:17:32.890 "base_bdevs_list": [ 00:17:32.890 { 00:17:32.890 "name": "spare", 00:17:32.890 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 00:17:32.890 "data_size": 63488 00:17:32.890 }, 00:17:32.890 { 00:17:32.890 "name": "BaseBdev2", 00:17:32.890 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 00:17:32.890 "data_size": 63488 00:17:32.890 }, 00:17:32.890 { 00:17:32.890 "name": "BaseBdev3", 00:17:32.890 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 
00:17:32.890 "data_size": 63488 00:17:32.890 }, 00:17:32.890 { 00:17:32.890 "name": "BaseBdev4", 00:17:32.890 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 00:17:32.890 "data_size": 63488 00:17:32.890 } 00:17:32.890 ] 00:17:32.890 }' 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:32.890 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
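[Editor's note] The `bdev_raid.sh: line 666: [: =: unary operator expected` message captured above is a classic bash quoting bug, not a RAID failure: when the variable under test is empty or unset, `[ $var = false ]` expands to `[ = false ]`, leaving `[` with a missing operand. A minimal sketch of the failure mode and the usual fix (the variable name `fast` here is hypothetical, for illustration only — the actual variable at line 666 of bdev_raid.sh is elided in this log):

```shell
#!/usr/bin/env bash
# With an empty variable, an UNQUOTED test `[ $fast = false ]` expands to
# `[ = false ]` and fails with "[: =: unary operator expected".
fast=""

# Quoting the expansion keeps an (empty) operand in place, so the test is
# well-formed and simply evaluates to false for "" != "false":
if [ "$fast" = false ]; then
    echo "fast mode disabled"
else
    echo "fast mode not explicitly disabled"
fi

# Bash's [[ ... ]] keyword is another common fix: it does not word-split,
# so even the unquoted form `[[ $fast = false ]]` is safe.
```

As the subsequent log lines show, the test harness tolerates this error and proceeds with the rebuild verification loop, so it is cosmetic here, but quoting (or `[[ ... ]]`) would silence it.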
00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.890 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.890 "name": "raid_bdev1", 00:17:32.890 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:32.890 "strip_size_kb": 64, 00:17:32.890 "state": "online", 00:17:32.890 "raid_level": "raid5f", 00:17:32.890 "superblock": true, 00:17:32.890 "num_base_bdevs": 4, 00:17:32.890 "num_base_bdevs_discovered": 4, 00:17:32.890 "num_base_bdevs_operational": 4, 00:17:32.890 "process": { 00:17:32.890 "type": "rebuild", 00:17:32.890 "target": "spare", 00:17:32.890 "progress": { 00:17:32.890 "blocks": 21120, 00:17:32.890 "percent": 11 00:17:32.890 } 00:17:32.890 }, 00:17:32.890 "base_bdevs_list": [ 00:17:32.890 { 00:17:32.890 "name": "spare", 00:17:32.890 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 00:17:32.890 "data_size": 63488 00:17:32.890 }, 00:17:32.890 { 00:17:32.890 "name": "BaseBdev2", 00:17:32.890 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 00:17:32.890 "data_size": 63488 00:17:32.890 }, 00:17:32.890 { 00:17:32.890 "name": "BaseBdev3", 00:17:32.890 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:32.890 "is_configured": true, 00:17:32.890 "data_offset": 2048, 
00:17:32.890 "data_size": 63488 00:17:32.890 }, 00:17:32.890 { 00:17:32.890 "name": "BaseBdev4", 00:17:32.891 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:32.891 "is_configured": true, 00:17:32.891 "data_offset": 2048, 00:17:32.891 "data_size": 63488 00:17:32.891 } 00:17:32.891 ] 00:17:32.891 }' 00:17:32.891 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.891 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.891 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.891 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.891 09:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.264 "name": "raid_bdev1", 00:17:34.264 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:34.264 "strip_size_kb": 64, 00:17:34.264 "state": "online", 00:17:34.264 "raid_level": "raid5f", 00:17:34.264 "superblock": true, 00:17:34.264 "num_base_bdevs": 4, 00:17:34.264 "num_base_bdevs_discovered": 4, 00:17:34.264 "num_base_bdevs_operational": 4, 00:17:34.264 "process": { 00:17:34.264 "type": "rebuild", 00:17:34.264 "target": "spare", 00:17:34.264 "progress": { 00:17:34.264 "blocks": 42240, 00:17:34.264 "percent": 22 00:17:34.264 } 00:17:34.264 }, 00:17:34.264 "base_bdevs_list": [ 00:17:34.264 { 00:17:34.264 "name": "spare", 00:17:34.264 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:34.264 "is_configured": true, 00:17:34.264 "data_offset": 2048, 00:17:34.264 "data_size": 63488 00:17:34.264 }, 00:17:34.264 { 00:17:34.264 "name": "BaseBdev2", 00:17:34.264 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:34.264 "is_configured": true, 00:17:34.264 "data_offset": 2048, 00:17:34.264 "data_size": 63488 00:17:34.264 }, 00:17:34.264 { 00:17:34.264 "name": "BaseBdev3", 00:17:34.264 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:34.264 "is_configured": true, 00:17:34.264 "data_offset": 2048, 00:17:34.264 "data_size": 63488 00:17:34.264 }, 00:17:34.264 { 00:17:34.264 "name": "BaseBdev4", 00:17:34.264 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:34.264 "is_configured": true, 00:17:34.264 "data_offset": 2048, 00:17:34.264 "data_size": 63488 00:17:34.264 } 00:17:34.264 ] 00:17:34.264 }' 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.264 09:29:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.264 09:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.197 "name": "raid_bdev1", 00:17:35.197 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:35.197 "strip_size_kb": 64, 00:17:35.197 "state": "online", 00:17:35.197 "raid_level": "raid5f", 00:17:35.197 "superblock": true, 00:17:35.197 "num_base_bdevs": 4, 00:17:35.197 "num_base_bdevs_discovered": 4, 00:17:35.197 "num_base_bdevs_operational": 
4, 00:17:35.197 "process": { 00:17:35.197 "type": "rebuild", 00:17:35.197 "target": "spare", 00:17:35.197 "progress": { 00:17:35.197 "blocks": 63360, 00:17:35.197 "percent": 33 00:17:35.197 } 00:17:35.197 }, 00:17:35.197 "base_bdevs_list": [ 00:17:35.197 { 00:17:35.197 "name": "spare", 00:17:35.197 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:35.197 "is_configured": true, 00:17:35.197 "data_offset": 2048, 00:17:35.197 "data_size": 63488 00:17:35.197 }, 00:17:35.197 { 00:17:35.197 "name": "BaseBdev2", 00:17:35.197 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:35.197 "is_configured": true, 00:17:35.197 "data_offset": 2048, 00:17:35.197 "data_size": 63488 00:17:35.197 }, 00:17:35.197 { 00:17:35.197 "name": "BaseBdev3", 00:17:35.197 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:35.197 "is_configured": true, 00:17:35.197 "data_offset": 2048, 00:17:35.197 "data_size": 63488 00:17:35.197 }, 00:17:35.197 { 00:17:35.197 "name": "BaseBdev4", 00:17:35.197 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:35.197 "is_configured": true, 00:17:35.197 "data_offset": 2048, 00:17:35.197 "data_size": 63488 00:17:35.197 } 00:17:35.197 ] 00:17:35.197 }' 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.197 09:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.131 
09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.131 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.389 "name": "raid_bdev1", 00:17:36.389 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:36.389 "strip_size_kb": 64, 00:17:36.389 "state": "online", 00:17:36.389 "raid_level": "raid5f", 00:17:36.389 "superblock": true, 00:17:36.389 "num_base_bdevs": 4, 00:17:36.389 "num_base_bdevs_discovered": 4, 00:17:36.389 "num_base_bdevs_operational": 4, 00:17:36.389 "process": { 00:17:36.389 "type": "rebuild", 00:17:36.389 "target": "spare", 00:17:36.389 "progress": { 00:17:36.389 "blocks": 86400, 00:17:36.389 "percent": 45 00:17:36.389 } 00:17:36.389 }, 00:17:36.389 "base_bdevs_list": [ 00:17:36.389 { 00:17:36.389 "name": "spare", 00:17:36.389 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:36.389 "is_configured": true, 00:17:36.389 "data_offset": 2048, 00:17:36.389 "data_size": 63488 00:17:36.389 }, 00:17:36.389 { 00:17:36.389 "name": "BaseBdev2", 00:17:36.389 "uuid": 
"12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:36.389 "is_configured": true, 00:17:36.389 "data_offset": 2048, 00:17:36.389 "data_size": 63488 00:17:36.389 }, 00:17:36.389 { 00:17:36.389 "name": "BaseBdev3", 00:17:36.389 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:36.389 "is_configured": true, 00:17:36.389 "data_offset": 2048, 00:17:36.389 "data_size": 63488 00:17:36.389 }, 00:17:36.389 { 00:17:36.389 "name": "BaseBdev4", 00:17:36.389 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:36.389 "is_configured": true, 00:17:36.389 "data_offset": 2048, 00:17:36.389 "data_size": 63488 00:17:36.389 } 00:17:36.389 ] 00:17:36.389 }' 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.389 09:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.322 "name": "raid_bdev1", 00:17:37.322 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:37.322 "strip_size_kb": 64, 00:17:37.322 "state": "online", 00:17:37.322 "raid_level": "raid5f", 00:17:37.322 "superblock": true, 00:17:37.322 "num_base_bdevs": 4, 00:17:37.322 "num_base_bdevs_discovered": 4, 00:17:37.322 "num_base_bdevs_operational": 4, 00:17:37.322 "process": { 00:17:37.322 "type": "rebuild", 00:17:37.322 "target": "spare", 00:17:37.322 "progress": { 00:17:37.322 "blocks": 107520, 00:17:37.322 "percent": 56 00:17:37.322 } 00:17:37.322 }, 00:17:37.322 "base_bdevs_list": [ 00:17:37.322 { 00:17:37.322 "name": "spare", 00:17:37.322 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:37.322 "is_configured": true, 00:17:37.322 "data_offset": 2048, 00:17:37.322 "data_size": 63488 00:17:37.322 }, 00:17:37.322 { 00:17:37.322 "name": "BaseBdev2", 00:17:37.322 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:37.322 "is_configured": true, 00:17:37.322 "data_offset": 2048, 00:17:37.322 "data_size": 63488 00:17:37.322 }, 00:17:37.322 { 00:17:37.322 "name": "BaseBdev3", 00:17:37.322 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:37.322 "is_configured": true, 00:17:37.322 "data_offset": 2048, 00:17:37.322 "data_size": 63488 00:17:37.322 }, 00:17:37.322 { 00:17:37.322 "name": "BaseBdev4", 00:17:37.322 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:37.322 "is_configured": true, 00:17:37.322 "data_offset": 
2048, 00:17:37.322 "data_size": 63488 00:17:37.322 } 00:17:37.322 ] 00:17:37.322 }' 00:17:37.322 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.580 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.580 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.580 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.580 09:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.512 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.512 
"name": "raid_bdev1", 00:17:38.512 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:38.512 "strip_size_kb": 64, 00:17:38.512 "state": "online", 00:17:38.512 "raid_level": "raid5f", 00:17:38.512 "superblock": true, 00:17:38.512 "num_base_bdevs": 4, 00:17:38.512 "num_base_bdevs_discovered": 4, 00:17:38.512 "num_base_bdevs_operational": 4, 00:17:38.512 "process": { 00:17:38.512 "type": "rebuild", 00:17:38.512 "target": "spare", 00:17:38.512 "progress": { 00:17:38.512 "blocks": 128640, 00:17:38.512 "percent": 67 00:17:38.512 } 00:17:38.512 }, 00:17:38.512 "base_bdevs_list": [ 00:17:38.512 { 00:17:38.512 "name": "spare", 00:17:38.512 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:38.512 "is_configured": true, 00:17:38.512 "data_offset": 2048, 00:17:38.512 "data_size": 63488 00:17:38.512 }, 00:17:38.512 { 00:17:38.512 "name": "BaseBdev2", 00:17:38.512 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:38.512 "is_configured": true, 00:17:38.512 "data_offset": 2048, 00:17:38.512 "data_size": 63488 00:17:38.512 }, 00:17:38.512 { 00:17:38.512 "name": "BaseBdev3", 00:17:38.512 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:38.512 "is_configured": true, 00:17:38.512 "data_offset": 2048, 00:17:38.512 "data_size": 63488 00:17:38.513 }, 00:17:38.513 { 00:17:38.513 "name": "BaseBdev4", 00:17:38.513 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:38.513 "is_configured": true, 00:17:38.513 "data_offset": 2048, 00:17:38.513 "data_size": 63488 00:17:38.513 } 00:17:38.513 ] 00:17:38.513 }' 00:17:38.513 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.513 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.513 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.770 09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.770 
09:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.702 09:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.702 "name": "raid_bdev1", 00:17:39.702 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:39.702 "strip_size_kb": 64, 00:17:39.702 "state": "online", 00:17:39.702 "raid_level": "raid5f", 00:17:39.702 "superblock": true, 00:17:39.702 "num_base_bdevs": 4, 00:17:39.702 "num_base_bdevs_discovered": 4, 00:17:39.702 "num_base_bdevs_operational": 4, 00:17:39.702 "process": { 00:17:39.702 "type": "rebuild", 00:17:39.702 "target": "spare", 00:17:39.702 "progress": { 00:17:39.702 "blocks": 151680, 00:17:39.702 "percent": 79 00:17:39.702 } 00:17:39.702 }, 
00:17:39.702 "base_bdevs_list": [ 00:17:39.702 { 00:17:39.702 "name": "spare", 00:17:39.702 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:39.702 "is_configured": true, 00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 }, 00:17:39.702 { 00:17:39.702 "name": "BaseBdev2", 00:17:39.702 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:39.702 "is_configured": true, 00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 }, 00:17:39.702 { 00:17:39.702 "name": "BaseBdev3", 00:17:39.702 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:39.702 "is_configured": true, 00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 }, 00:17:39.702 { 00:17:39.702 "name": "BaseBdev4", 00:17:39.702 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:39.702 "is_configured": true, 00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 } 00:17:39.702 ] 00:17:39.702 }' 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.702 09:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.076 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.077 "name": "raid_bdev1", 00:17:41.077 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:41.077 "strip_size_kb": 64, 00:17:41.077 "state": "online", 00:17:41.077 "raid_level": "raid5f", 00:17:41.077 "superblock": true, 00:17:41.077 "num_base_bdevs": 4, 00:17:41.077 "num_base_bdevs_discovered": 4, 00:17:41.077 "num_base_bdevs_operational": 4, 00:17:41.077 "process": { 00:17:41.077 "type": "rebuild", 00:17:41.077 "target": "spare", 00:17:41.077 "progress": { 00:17:41.077 "blocks": 172800, 00:17:41.077 "percent": 90 00:17:41.077 } 00:17:41.077 }, 00:17:41.077 "base_bdevs_list": [ 00:17:41.077 { 00:17:41.077 "name": "spare", 00:17:41.077 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:41.077 "is_configured": true, 00:17:41.077 "data_offset": 2048, 00:17:41.077 "data_size": 63488 00:17:41.077 }, 00:17:41.077 { 00:17:41.077 "name": "BaseBdev2", 00:17:41.077 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:41.077 "is_configured": true, 00:17:41.077 "data_offset": 2048, 00:17:41.077 "data_size": 63488 00:17:41.077 }, 00:17:41.077 { 00:17:41.077 "name": "BaseBdev3", 
00:17:41.077 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:41.077 "is_configured": true, 00:17:41.077 "data_offset": 2048, 00:17:41.077 "data_size": 63488 00:17:41.077 }, 00:17:41.077 { 00:17:41.077 "name": "BaseBdev4", 00:17:41.077 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:41.077 "is_configured": true, 00:17:41.077 "data_offset": 2048, 00:17:41.077 "data_size": 63488 00:17:41.077 } 00:17:41.077 ] 00:17:41.077 }' 00:17:41.077 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.077 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.077 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.077 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.077 09:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.693 [2024-11-20 09:30:07.120606] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:41.693 [2024-11-20 09:30:07.120710] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:41.693 [2024-11-20 09:30:07.120877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.950 09:30:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.950 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.950 "name": "raid_bdev1", 00:17:41.951 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:41.951 "strip_size_kb": 64, 00:17:41.951 "state": "online", 00:17:41.951 "raid_level": "raid5f", 00:17:41.951 "superblock": true, 00:17:41.951 "num_base_bdevs": 4, 00:17:41.951 "num_base_bdevs_discovered": 4, 00:17:41.951 "num_base_bdevs_operational": 4, 00:17:41.951 "base_bdevs_list": [ 00:17:41.951 { 00:17:41.951 "name": "spare", 00:17:41.951 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 00:17:41.951 "data_size": 63488 00:17:41.951 }, 00:17:41.951 { 00:17:41.951 "name": "BaseBdev2", 00:17:41.951 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 00:17:41.951 "data_size": 63488 00:17:41.951 }, 00:17:41.951 { 00:17:41.951 "name": "BaseBdev3", 00:17:41.951 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 00:17:41.951 "data_size": 63488 00:17:41.951 }, 00:17:41.951 { 00:17:41.951 "name": "BaseBdev4", 00:17:41.951 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 
00:17:41.951 "data_size": 63488 00:17:41.951 } 00:17:41.951 ] 00:17:41.951 }' 00:17:41.951 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.951 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:41.951 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.208 "name": "raid_bdev1", 00:17:42.208 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:42.208 "strip_size_kb": 64, 00:17:42.208 
"state": "online", 00:17:42.208 "raid_level": "raid5f", 00:17:42.208 "superblock": true, 00:17:42.208 "num_base_bdevs": 4, 00:17:42.208 "num_base_bdevs_discovered": 4, 00:17:42.208 "num_base_bdevs_operational": 4, 00:17:42.208 "base_bdevs_list": [ 00:17:42.208 { 00:17:42.208 "name": "spare", 00:17:42.208 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:42.208 "is_configured": true, 00:17:42.208 "data_offset": 2048, 00:17:42.208 "data_size": 63488 00:17:42.208 }, 00:17:42.208 { 00:17:42.208 "name": "BaseBdev2", 00:17:42.208 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:42.208 "is_configured": true, 00:17:42.208 "data_offset": 2048, 00:17:42.208 "data_size": 63488 00:17:42.208 }, 00:17:42.208 { 00:17:42.208 "name": "BaseBdev3", 00:17:42.208 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:42.208 "is_configured": true, 00:17:42.208 "data_offset": 2048, 00:17:42.208 "data_size": 63488 00:17:42.208 }, 00:17:42.208 { 00:17:42.208 "name": "BaseBdev4", 00:17:42.208 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:42.208 "is_configured": true, 00:17:42.208 "data_offset": 2048, 00:17:42.208 "data_size": 63488 00:17:42.208 } 00:17:42.208 ] 00:17:42.208 }' 00:17:42.208 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.209 "name": "raid_bdev1", 00:17:42.209 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:42.209 "strip_size_kb": 64, 00:17:42.209 "state": "online", 00:17:42.209 "raid_level": "raid5f", 00:17:42.209 "superblock": true, 00:17:42.209 "num_base_bdevs": 4, 00:17:42.209 "num_base_bdevs_discovered": 4, 00:17:42.209 "num_base_bdevs_operational": 4, 00:17:42.209 "base_bdevs_list": [ 00:17:42.209 { 00:17:42.209 "name": "spare", 00:17:42.209 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:42.209 "is_configured": true, 00:17:42.209 
"data_offset": 2048, 00:17:42.209 "data_size": 63488 00:17:42.209 }, 00:17:42.209 { 00:17:42.209 "name": "BaseBdev2", 00:17:42.209 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:42.209 "is_configured": true, 00:17:42.209 "data_offset": 2048, 00:17:42.209 "data_size": 63488 00:17:42.209 }, 00:17:42.209 { 00:17:42.209 "name": "BaseBdev3", 00:17:42.209 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:42.209 "is_configured": true, 00:17:42.209 "data_offset": 2048, 00:17:42.209 "data_size": 63488 00:17:42.209 }, 00:17:42.209 { 00:17:42.209 "name": "BaseBdev4", 00:17:42.209 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:42.209 "is_configured": true, 00:17:42.209 "data_offset": 2048, 00:17:42.209 "data_size": 63488 00:17:42.209 } 00:17:42.209 ] 00:17:42.209 }' 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.209 09:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.776 [2024-11-20 09:30:08.082571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.776 [2024-11-20 09:30:08.082661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.776 [2024-11-20 09:30:08.082793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.776 [2024-11-20 09:30:08.082949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.776 [2024-11-20 09:30:08.082977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:42.776 
09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.776 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:43.033 /dev/nbd0 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.033 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.034 1+0 records in 00:17:43.034 1+0 records out 00:17:43.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377466 s, 10.9 MB/s 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.034 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:43.291 /dev/nbd1 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.291 1+0 records in 00:17:43.291 1+0 records out 00:17:43.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457084 s, 9.0 MB/s 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.291 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.548 09:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.806 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 
09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.063 [2024-11-20 09:30:09.462796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.063 [2024-11-20 09:30:09.462939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.063 [2024-11-20 09:30:09.463015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:44.063 [2024-11-20 09:30:09.463060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.063 [2024-11-20 09:30:09.465905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.063 [2024-11-20 09:30:09.465996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.063 [2024-11-20 09:30:09.466115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.063 [2024-11-20 09:30:09.466185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.063 [2024-11-20 09:30:09.466349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.063 [2024-11-20 09:30:09.466474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.063 [2024-11-20 09:30:09.466571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:44.063 spare 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.326 [2024-11-20 09:30:09.566505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:44.326 [2024-11-20 09:30:09.566678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:44.326 [2024-11-20 09:30:09.567141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:44.326 [2024-11-20 09:30:09.576723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:44.326 [2024-11-20 09:30:09.576816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:44.326 [2024-11-20 09:30:09.577143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.326 "name": "raid_bdev1", 00:17:44.326 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:44.326 "strip_size_kb": 64, 00:17:44.326 "state": "online", 00:17:44.326 "raid_level": "raid5f", 00:17:44.326 "superblock": true, 00:17:44.326 "num_base_bdevs": 4, 00:17:44.326 "num_base_bdevs_discovered": 4, 00:17:44.326 "num_base_bdevs_operational": 4, 00:17:44.326 "base_bdevs_list": [ 00:17:44.326 { 00:17:44.326 "name": "spare", 00:17:44.326 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:44.326 "is_configured": true, 00:17:44.326 "data_offset": 2048, 00:17:44.326 "data_size": 63488 00:17:44.326 }, 00:17:44.326 { 00:17:44.326 "name": "BaseBdev2", 00:17:44.326 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:44.326 "is_configured": true, 00:17:44.326 "data_offset": 2048, 00:17:44.326 "data_size": 63488 00:17:44.326 }, 00:17:44.326 { 00:17:44.326 "name": "BaseBdev3", 00:17:44.326 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:44.326 
"is_configured": true, 00:17:44.326 "data_offset": 2048, 00:17:44.326 "data_size": 63488 00:17:44.326 }, 00:17:44.326 { 00:17:44.326 "name": "BaseBdev4", 00:17:44.326 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:44.326 "is_configured": true, 00:17:44.326 "data_offset": 2048, 00:17:44.326 "data_size": 63488 00:17:44.326 } 00:17:44.326 ] 00:17:44.326 }' 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.326 09:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.901 "name": "raid_bdev1", 00:17:44.901 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:44.901 "strip_size_kb": 64, 00:17:44.901 "state": "online", 00:17:44.901 "raid_level": "raid5f", 
00:17:44.901 "superblock": true, 00:17:44.901 "num_base_bdevs": 4, 00:17:44.901 "num_base_bdevs_discovered": 4, 00:17:44.901 "num_base_bdevs_operational": 4, 00:17:44.901 "base_bdevs_list": [ 00:17:44.901 { 00:17:44.901 "name": "spare", 00:17:44.901 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev2", 00:17:44.901 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev3", 00:17:44.901 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev4", 00:17:44.901 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 } 00:17:44.901 ] 00:17:44.901 }' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.901 [2024-11-20 09:30:10.275657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.901 "name": "raid_bdev1", 00:17:44.901 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:44.901 "strip_size_kb": 64, 00:17:44.901 "state": "online", 00:17:44.901 "raid_level": "raid5f", 00:17:44.901 "superblock": true, 00:17:44.901 "num_base_bdevs": 4, 00:17:44.901 "num_base_bdevs_discovered": 3, 00:17:44.901 "num_base_bdevs_operational": 3, 00:17:44.901 "base_bdevs_list": [ 00:17:44.901 { 00:17:44.901 "name": null, 00:17:44.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.901 "is_configured": false, 00:17:44.901 "data_offset": 0, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev2", 00:17:44.901 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev3", 00:17:44.901 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev4", 00:17:44.901 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 } 00:17:44.901 ] 00:17:44.901 }' 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:44.901 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.466 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.466 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.466 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.466 [2024-11-20 09:30:10.735259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.466 [2024-11-20 09:30:10.735555] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.466 [2024-11-20 09:30:10.735647] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:45.466 [2024-11-20 09:30:10.735721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.466 [2024-11-20 09:30:10.754925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:45.466 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.466 09:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.466 [2024-11-20 09:30:10.767651] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.398 09:30:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.398 "name": "raid_bdev1", 00:17:46.398 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:46.398 "strip_size_kb": 64, 00:17:46.398 "state": "online", 00:17:46.398 "raid_level": "raid5f", 00:17:46.398 "superblock": true, 00:17:46.398 "num_base_bdevs": 4, 00:17:46.398 "num_base_bdevs_discovered": 4, 00:17:46.398 "num_base_bdevs_operational": 4, 00:17:46.398 "process": { 00:17:46.398 "type": "rebuild", 00:17:46.398 "target": "spare", 00:17:46.398 "progress": { 00:17:46.398 "blocks": 17280, 00:17:46.398 "percent": 9 00:17:46.398 } 00:17:46.398 }, 00:17:46.398 "base_bdevs_list": [ 00:17:46.398 { 00:17:46.398 "name": "spare", 00:17:46.398 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:46.398 "is_configured": true, 00:17:46.398 "data_offset": 2048, 00:17:46.398 "data_size": 63488 00:17:46.398 }, 00:17:46.398 { 00:17:46.398 "name": "BaseBdev2", 00:17:46.398 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:46.398 "is_configured": true, 00:17:46.398 "data_offset": 2048, 00:17:46.398 "data_size": 63488 00:17:46.398 }, 00:17:46.398 { 00:17:46.398 "name": "BaseBdev3", 00:17:46.398 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:46.398 "is_configured": true, 00:17:46.398 "data_offset": 2048, 00:17:46.398 "data_size": 
63488 00:17:46.398 }, 00:17:46.398 { 00:17:46.398 "name": "BaseBdev4", 00:17:46.398 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:46.398 "is_configured": true, 00:17:46.398 "data_offset": 2048, 00:17:46.398 "data_size": 63488 00:17:46.398 } 00:17:46.398 ] 00:17:46.398 }' 00:17:46.398 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.655 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.655 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.655 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.655 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.655 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.655 09:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.655 [2024-11-20 09:30:11.907369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.655 [2024-11-20 09:30:11.977801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.655 [2024-11-20 09:30:11.978030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.655 [2024-11-20 09:30:11.978110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.656 [2024-11-20 09:30:11.978158] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.656 "name": "raid_bdev1", 00:17:46.656 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:46.656 "strip_size_kb": 64, 00:17:46.656 "state": "online", 00:17:46.656 "raid_level": "raid5f", 00:17:46.656 "superblock": true, 00:17:46.656 "num_base_bdevs": 4, 00:17:46.656 "num_base_bdevs_discovered": 3, 00:17:46.656 "num_base_bdevs_operational": 3, 00:17:46.656 "base_bdevs_list": [ 00:17:46.656 
{ 00:17:46.656 "name": null, 00:17:46.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.656 "is_configured": false, 00:17:46.656 "data_offset": 0, 00:17:46.656 "data_size": 63488 00:17:46.656 }, 00:17:46.656 { 00:17:46.656 "name": "BaseBdev2", 00:17:46.656 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:46.656 "is_configured": true, 00:17:46.656 "data_offset": 2048, 00:17:46.656 "data_size": 63488 00:17:46.656 }, 00:17:46.656 { 00:17:46.656 "name": "BaseBdev3", 00:17:46.656 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:46.656 "is_configured": true, 00:17:46.656 "data_offset": 2048, 00:17:46.656 "data_size": 63488 00:17:46.656 }, 00:17:46.656 { 00:17:46.656 "name": "BaseBdev4", 00:17:46.656 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:46.656 "is_configured": true, 00:17:46.656 "data_offset": 2048, 00:17:46.656 "data_size": 63488 00:17:46.656 } 00:17:46.656 ] 00:17:46.656 }' 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.656 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.221 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.221 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 [2024-11-20 09:30:12.478247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.221 [2024-11-20 09:30:12.478386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.221 [2024-11-20 09:30:12.478494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:47.221 [2024-11-20 09:30:12.478545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.221 [2024-11-20 09:30:12.479187] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.221 [2024-11-20 09:30:12.479276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.221 [2024-11-20 09:30:12.479457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.221 [2024-11-20 09:30:12.479516] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.221 [2024-11-20 09:30:12.479533] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:47.221 [2024-11-20 09:30:12.479577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.221 [2024-11-20 09:30:12.497592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:47.221 spare 00:17:47.221 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.221 09:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:47.221 [2024-11-20 09:30:12.509044] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.161 "name": "raid_bdev1", 00:17:48.161 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:48.161 "strip_size_kb": 64, 00:17:48.161 "state": "online", 00:17:48.161 "raid_level": "raid5f", 00:17:48.161 "superblock": true, 00:17:48.161 "num_base_bdevs": 4, 00:17:48.161 "num_base_bdevs_discovered": 4, 00:17:48.161 "num_base_bdevs_operational": 4, 00:17:48.161 "process": { 00:17:48.161 "type": "rebuild", 00:17:48.161 "target": "spare", 00:17:48.161 "progress": { 00:17:48.161 "blocks": 17280, 00:17:48.161 "percent": 9 00:17:48.161 } 00:17:48.161 }, 00:17:48.161 "base_bdevs_list": [ 00:17:48.161 { 00:17:48.161 "name": "spare", 00:17:48.161 "uuid": "19ad6155-d0f1-5a2e-89ca-7539f3e2a486", 00:17:48.161 "is_configured": true, 00:17:48.161 "data_offset": 2048, 00:17:48.161 "data_size": 63488 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "name": "BaseBdev2", 00:17:48.161 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:48.161 "is_configured": true, 00:17:48.161 "data_offset": 2048, 00:17:48.161 "data_size": 63488 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "name": "BaseBdev3", 00:17:48.161 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:48.161 "is_configured": true, 00:17:48.161 "data_offset": 2048, 00:17:48.161 "data_size": 63488 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "name": "BaseBdev4", 00:17:48.161 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:48.161 "is_configured": true, 00:17:48.161 "data_offset": 2048, 00:17:48.161 "data_size": 63488 00:17:48.161 } 
00:17:48.161 ] 00:17:48.161 }' 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.161 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.419 [2024-11-20 09:30:13.660693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.419 [2024-11-20 09:30:13.719221] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.419 [2024-11-20 09:30:13.719395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.419 [2024-11-20 09:30:13.719501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.419 [2024-11-20 09:30:13.719541] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.419 "name": "raid_bdev1", 00:17:48.419 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:48.419 "strip_size_kb": 64, 00:17:48.419 "state": "online", 00:17:48.419 "raid_level": "raid5f", 00:17:48.419 "superblock": true, 00:17:48.419 "num_base_bdevs": 4, 00:17:48.419 "num_base_bdevs_discovered": 3, 00:17:48.419 "num_base_bdevs_operational": 3, 00:17:48.419 "base_bdevs_list": [ 00:17:48.419 { 00:17:48.419 "name": null, 00:17:48.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.419 "is_configured": false, 00:17:48.419 "data_offset": 0, 00:17:48.419 "data_size": 63488 00:17:48.419 }, 00:17:48.419 { 00:17:48.419 
"name": "BaseBdev2", 00:17:48.419 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:48.419 "is_configured": true, 00:17:48.419 "data_offset": 2048, 00:17:48.419 "data_size": 63488 00:17:48.419 }, 00:17:48.419 { 00:17:48.419 "name": "BaseBdev3", 00:17:48.419 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:48.419 "is_configured": true, 00:17:48.419 "data_offset": 2048, 00:17:48.419 "data_size": 63488 00:17:48.419 }, 00:17:48.419 { 00:17:48.419 "name": "BaseBdev4", 00:17:48.419 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:48.419 "is_configured": true, 00:17:48.419 "data_offset": 2048, 00:17:48.419 "data_size": 63488 00:17:48.419 } 00:17:48.419 ] 00:17:48.419 }' 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.419 09:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.984 "name": "raid_bdev1", 00:17:48.984 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:48.984 "strip_size_kb": 64, 00:17:48.984 "state": "online", 00:17:48.984 "raid_level": "raid5f", 00:17:48.984 "superblock": true, 00:17:48.984 "num_base_bdevs": 4, 00:17:48.984 "num_base_bdevs_discovered": 3, 00:17:48.984 "num_base_bdevs_operational": 3, 00:17:48.984 "base_bdevs_list": [ 00:17:48.984 { 00:17:48.984 "name": null, 00:17:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.984 "is_configured": false, 00:17:48.984 "data_offset": 0, 00:17:48.984 "data_size": 63488 00:17:48.984 }, 00:17:48.984 { 00:17:48.984 "name": "BaseBdev2", 00:17:48.984 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:48.984 "is_configured": true, 00:17:48.984 "data_offset": 2048, 00:17:48.984 "data_size": 63488 00:17:48.984 }, 00:17:48.984 { 00:17:48.984 "name": "BaseBdev3", 00:17:48.984 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:48.984 "is_configured": true, 00:17:48.984 "data_offset": 2048, 00:17:48.984 "data_size": 63488 00:17:48.984 }, 00:17:48.984 { 00:17:48.984 "name": "BaseBdev4", 00:17:48.984 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:48.984 "is_configured": true, 00:17:48.984 "data_offset": 2048, 00:17:48.984 "data_size": 63488 00:17:48.984 } 00:17:48.984 ] 00:17:48.984 }' 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.984 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.984 [2024-11-20 09:30:14.379914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.984 [2024-11-20 09:30:14.380041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.984 [2024-11-20 09:30:14.380082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:48.984 [2024-11-20 09:30:14.380094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.984 [2024-11-20 09:30:14.380677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.984 [2024-11-20 09:30:14.380710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.984 [2024-11-20 09:30:14.380811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:48.984 [2024-11-20 09:30:14.380828] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.984 [2024-11-20 09:30:14.380844] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.985 [2024-11-20 09:30:14.380856] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:17:48.985 BaseBdev1 00:17:48.985 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.985 09:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.358 09:30:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.358 "name": "raid_bdev1", 00:17:50.358 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:50.358 "strip_size_kb": 64, 00:17:50.358 "state": "online", 00:17:50.358 "raid_level": "raid5f", 00:17:50.358 "superblock": true, 00:17:50.358 "num_base_bdevs": 4, 00:17:50.358 "num_base_bdevs_discovered": 3, 00:17:50.358 "num_base_bdevs_operational": 3, 00:17:50.358 "base_bdevs_list": [ 00:17:50.358 { 00:17:50.358 "name": null, 00:17:50.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.358 "is_configured": false, 00:17:50.358 "data_offset": 0, 00:17:50.358 "data_size": 63488 00:17:50.358 }, 00:17:50.358 { 00:17:50.358 "name": "BaseBdev2", 00:17:50.358 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:50.358 "is_configured": true, 00:17:50.358 "data_offset": 2048, 00:17:50.358 "data_size": 63488 00:17:50.358 }, 00:17:50.358 { 00:17:50.358 "name": "BaseBdev3", 00:17:50.358 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:50.358 "is_configured": true, 00:17:50.358 "data_offset": 2048, 00:17:50.358 "data_size": 63488 00:17:50.358 }, 00:17:50.358 { 00:17:50.358 "name": "BaseBdev4", 00:17:50.358 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:50.358 "is_configured": true, 00:17:50.358 "data_offset": 2048, 00:17:50.358 "data_size": 63488 00:17:50.358 } 00:17:50.358 ] 00:17:50.358 }' 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.358 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.617 09:30:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.617 "name": "raid_bdev1", 00:17:50.617 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:50.617 "strip_size_kb": 64, 00:17:50.617 "state": "online", 00:17:50.617 "raid_level": "raid5f", 00:17:50.617 "superblock": true, 00:17:50.617 "num_base_bdevs": 4, 00:17:50.617 "num_base_bdevs_discovered": 3, 00:17:50.617 "num_base_bdevs_operational": 3, 00:17:50.617 "base_bdevs_list": [ 00:17:50.617 { 00:17:50.617 "name": null, 00:17:50.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.617 "is_configured": false, 00:17:50.617 "data_offset": 0, 00:17:50.617 "data_size": 63488 00:17:50.617 }, 00:17:50.617 { 00:17:50.617 "name": "BaseBdev2", 00:17:50.617 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:50.617 "is_configured": true, 00:17:50.617 "data_offset": 2048, 00:17:50.617 "data_size": 63488 00:17:50.617 }, 00:17:50.617 { 00:17:50.617 "name": "BaseBdev3", 00:17:50.617 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:50.617 "is_configured": true, 00:17:50.617 "data_offset": 2048, 00:17:50.617 "data_size": 63488 00:17:50.617 }, 00:17:50.617 { 00:17:50.617 "name": "BaseBdev4", 00:17:50.617 "uuid": 
"ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:50.617 "is_configured": true, 00:17:50.617 "data_offset": 2048, 00:17:50.617 "data_size": 63488 00:17:50.617 } 00:17:50.617 ] 00:17:50.617 }' 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.617 09:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 [2024-11-20 09:30:16.041556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.617 
[2024-11-20 09:30:16.041820] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.617 [2024-11-20 09:30:16.041893] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:50.617 request: 00:17:50.617 { 00:17:50.617 "base_bdev": "BaseBdev1", 00:17:50.617 "raid_bdev": "raid_bdev1", 00:17:50.617 "method": "bdev_raid_add_base_bdev", 00:17:50.617 "req_id": 1 00:17:50.617 } 00:17:50.617 Got JSON-RPC error response 00:17:50.617 response: 00:17:50.617 { 00:17:50.617 "code": -22, 00:17:50.617 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:50.617 } 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.617 09:30:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.009 "name": "raid_bdev1", 00:17:52.009 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:52.009 "strip_size_kb": 64, 00:17:52.009 "state": "online", 00:17:52.009 "raid_level": "raid5f", 00:17:52.009 "superblock": true, 00:17:52.009 "num_base_bdevs": 4, 00:17:52.009 "num_base_bdevs_discovered": 3, 00:17:52.009 "num_base_bdevs_operational": 3, 00:17:52.009 "base_bdevs_list": [ 00:17:52.009 { 00:17:52.009 "name": null, 00:17:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.009 "is_configured": false, 00:17:52.009 "data_offset": 0, 00:17:52.009 "data_size": 63488 00:17:52.009 }, 00:17:52.009 { 00:17:52.009 "name": "BaseBdev2", 00:17:52.009 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:52.009 "is_configured": true, 00:17:52.009 "data_offset": 2048, 00:17:52.009 "data_size": 63488 00:17:52.009 }, 00:17:52.009 { 00:17:52.009 "name": 
"BaseBdev3", 00:17:52.009 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:52.009 "is_configured": true, 00:17:52.009 "data_offset": 2048, 00:17:52.009 "data_size": 63488 00:17:52.009 }, 00:17:52.009 { 00:17:52.009 "name": "BaseBdev4", 00:17:52.009 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:52.009 "is_configured": true, 00:17:52.009 "data_offset": 2048, 00:17:52.009 "data_size": 63488 00:17:52.009 } 00:17:52.009 ] 00:17:52.009 }' 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.009 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.267 "name": "raid_bdev1", 00:17:52.267 "uuid": "0aed78d6-0a4b-40d5-be20-69fa75d1d5ac", 00:17:52.267 
"strip_size_kb": 64, 00:17:52.267 "state": "online", 00:17:52.267 "raid_level": "raid5f", 00:17:52.267 "superblock": true, 00:17:52.267 "num_base_bdevs": 4, 00:17:52.267 "num_base_bdevs_discovered": 3, 00:17:52.267 "num_base_bdevs_operational": 3, 00:17:52.267 "base_bdevs_list": [ 00:17:52.267 { 00:17:52.267 "name": null, 00:17:52.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.267 "is_configured": false, 00:17:52.267 "data_offset": 0, 00:17:52.267 "data_size": 63488 00:17:52.267 }, 00:17:52.267 { 00:17:52.267 "name": "BaseBdev2", 00:17:52.267 "uuid": "12d01d9a-0808-55ce-8cef-6f259738a574", 00:17:52.267 "is_configured": true, 00:17:52.267 "data_offset": 2048, 00:17:52.267 "data_size": 63488 00:17:52.267 }, 00:17:52.267 { 00:17:52.267 "name": "BaseBdev3", 00:17:52.267 "uuid": "647ed005-9bf7-5701-9544-04ac7a34c643", 00:17:52.267 "is_configured": true, 00:17:52.267 "data_offset": 2048, 00:17:52.267 "data_size": 63488 00:17:52.267 }, 00:17:52.267 { 00:17:52.267 "name": "BaseBdev4", 00:17:52.267 "uuid": "ca87a751-b191-576b-a500-1b6c87bd5dbc", 00:17:52.267 "is_configured": true, 00:17:52.267 "data_offset": 2048, 00:17:52.267 "data_size": 63488 00:17:52.267 } 00:17:52.267 ] 00:17:52.267 }' 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85553 00:17:52.267 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85553 ']' 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85553 00:17:52.268 
09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85553 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.268 killing process with pid 85553 00:17:52.268 Received shutdown signal, test time was about 60.000000 seconds 00:17:52.268 00:17:52.268 Latency(us) 00:17:52.268 [2024-11-20T09:30:17.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.268 [2024-11-20T09:30:17.724Z] =================================================================================================================== 00:17:52.268 [2024-11-20T09:30:17.724Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85553' 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85553 00:17:52.268 09:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85553 00:17:52.268 [2024-11-20 09:30:17.693814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.268 [2024-11-20 09:30:17.693970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.268 [2024-11-20 09:30:17.694073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.268 [2024-11-20 09:30:17.694090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:52.834 [2024-11-20 09:30:18.215548] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.210 ************************************ 00:17:54.210 END TEST raid5f_rebuild_test_sb 00:17:54.210 ************************************ 00:17:54.210 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:54.210 00:17:54.210 real 0m27.718s 00:17:54.210 user 0m34.998s 00:17:54.210 sys 0m3.056s 00:17:54.210 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.210 09:30:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 09:30:19 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:54.210 09:30:19 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:54.210 09:30:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:54.210 09:30:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.210 09:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 ************************************ 00:17:54.210 START TEST raid_state_function_test_sb_4k 00:17:54.210 ************************************ 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86370 
00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86370' 00:17:54.210 Process raid pid: 86370 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86370 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86370 ']' 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.210 09:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 [2024-11-20 09:30:19.559406] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:17:54.210 [2024-11-20 09:30:19.559658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.469 [2024-11-20 09:30:19.716495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.469 [2024-11-20 09:30:19.841221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.727 [2024-11-20 09:30:20.060483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.727 [2024-11-20 09:30:20.060627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.292 [2024-11-20 09:30:20.458191] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.292 [2024-11-20 09:30:20.458242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.292 [2024-11-20 09:30:20.458253] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.292 [2024-11-20 09:30:20.458263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.292 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.293 "name": "Existed_Raid", 00:17:55.293 "uuid": 
"e3ab0b19-94ec-4734-8b9c-e64d54ee9258", 00:17:55.293 "strip_size_kb": 0, 00:17:55.293 "state": "configuring", 00:17:55.293 "raid_level": "raid1", 00:17:55.293 "superblock": true, 00:17:55.293 "num_base_bdevs": 2, 00:17:55.293 "num_base_bdevs_discovered": 0, 00:17:55.293 "num_base_bdevs_operational": 2, 00:17:55.293 "base_bdevs_list": [ 00:17:55.293 { 00:17:55.293 "name": "BaseBdev1", 00:17:55.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.293 "is_configured": false, 00:17:55.293 "data_offset": 0, 00:17:55.293 "data_size": 0 00:17:55.293 }, 00:17:55.293 { 00:17:55.293 "name": "BaseBdev2", 00:17:55.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.293 "is_configured": false, 00:17:55.293 "data_offset": 0, 00:17:55.293 "data_size": 0 00:17:55.293 } 00:17:55.293 ] 00:17:55.293 }' 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.293 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.551 [2024-11-20 09:30:20.917341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.551 [2024-11-20 09:30:20.917440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.551 09:30:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.551 [2024-11-20 09:30:20.929311] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.551 [2024-11-20 09:30:20.929393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.551 [2024-11-20 09:30:20.929422] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.551 [2024-11-20 09:30:20.929481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.551 [2024-11-20 09:30:20.978021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.551 BaseBdev1 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.551 09:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.551 [ 00:17:55.551 { 00:17:55.809 "name": "BaseBdev1", 00:17:55.809 "aliases": [ 00:17:55.809 "c9b35703-ee44-415c-a7f5-7e0d49ea4186" 00:17:55.809 ], 00:17:55.809 "product_name": "Malloc disk", 00:17:55.809 "block_size": 4096, 00:17:55.809 "num_blocks": 8192, 00:17:55.809 "uuid": "c9b35703-ee44-415c-a7f5-7e0d49ea4186", 00:17:55.809 "assigned_rate_limits": { 00:17:55.809 "rw_ios_per_sec": 0, 00:17:55.809 "rw_mbytes_per_sec": 0, 00:17:55.809 "r_mbytes_per_sec": 0, 00:17:55.809 "w_mbytes_per_sec": 0 00:17:55.809 }, 00:17:55.809 "claimed": true, 00:17:55.809 "claim_type": "exclusive_write", 00:17:55.809 "zoned": false, 00:17:55.809 "supported_io_types": { 00:17:55.809 "read": true, 00:17:55.809 "write": true, 00:17:55.809 "unmap": true, 00:17:55.809 "flush": true, 00:17:55.809 "reset": true, 00:17:55.809 "nvme_admin": false, 00:17:55.809 "nvme_io": false, 00:17:55.809 "nvme_io_md": false, 00:17:55.809 "write_zeroes": true, 00:17:55.809 "zcopy": true, 00:17:55.809 
"get_zone_info": false, 00:17:55.809 "zone_management": false, 00:17:55.809 "zone_append": false, 00:17:55.809 "compare": false, 00:17:55.809 "compare_and_write": false, 00:17:55.809 "abort": true, 00:17:55.809 "seek_hole": false, 00:17:55.809 "seek_data": false, 00:17:55.809 "copy": true, 00:17:55.809 "nvme_iov_md": false 00:17:55.809 }, 00:17:55.809 "memory_domains": [ 00:17:55.809 { 00:17:55.809 "dma_device_id": "system", 00:17:55.809 "dma_device_type": 1 00:17:55.809 }, 00:17:55.809 { 00:17:55.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.809 "dma_device_type": 2 00:17:55.809 } 00:17:55.809 ], 00:17:55.809 "driver_specific": {} 00:17:55.809 } 00:17:55.809 ] 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.810 "name": "Existed_Raid", 00:17:55.810 "uuid": "985f60b0-0ded-489f-a2a9-cd74760f8306", 00:17:55.810 "strip_size_kb": 0, 00:17:55.810 "state": "configuring", 00:17:55.810 "raid_level": "raid1", 00:17:55.810 "superblock": true, 00:17:55.810 "num_base_bdevs": 2, 00:17:55.810 "num_base_bdevs_discovered": 1, 00:17:55.810 "num_base_bdevs_operational": 2, 00:17:55.810 "base_bdevs_list": [ 00:17:55.810 { 00:17:55.810 "name": "BaseBdev1", 00:17:55.810 "uuid": "c9b35703-ee44-415c-a7f5-7e0d49ea4186", 00:17:55.810 "is_configured": true, 00:17:55.810 "data_offset": 256, 00:17:55.810 "data_size": 7936 00:17:55.810 }, 00:17:55.810 { 00:17:55.810 "name": "BaseBdev2", 00:17:55.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.810 "is_configured": false, 00:17:55.810 "data_offset": 0, 00:17:55.810 "data_size": 0 00:17:55.810 } 00:17:55.810 ] 00:17:55.810 }' 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.810 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.068 [2024-11-20 09:30:21.505208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.068 [2024-11-20 09:30:21.505319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.068 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.068 [2024-11-20 09:30:21.517224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.068 [2024-11-20 09:30:21.519228] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.068 [2024-11-20 09:30:21.519312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:56.325 09:30:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.325 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.325 "name": "Existed_Raid", 00:17:56.325 "uuid": "5afb172d-c289-4f89-8390-0a95344edbc3", 00:17:56.325 "strip_size_kb": 0, 00:17:56.325 "state": "configuring", 00:17:56.325 "raid_level": "raid1", 00:17:56.325 "superblock": true, 
00:17:56.325 "num_base_bdevs": 2, 00:17:56.325 "num_base_bdevs_discovered": 1, 00:17:56.325 "num_base_bdevs_operational": 2, 00:17:56.325 "base_bdevs_list": [ 00:17:56.325 { 00:17:56.325 "name": "BaseBdev1", 00:17:56.325 "uuid": "c9b35703-ee44-415c-a7f5-7e0d49ea4186", 00:17:56.325 "is_configured": true, 00:17:56.325 "data_offset": 256, 00:17:56.325 "data_size": 7936 00:17:56.325 }, 00:17:56.325 { 00:17:56.325 "name": "BaseBdev2", 00:17:56.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.326 "is_configured": false, 00:17:56.326 "data_offset": 0, 00:17:56.326 "data_size": 0 00:17:56.326 } 00:17:56.326 ] 00:17:56.326 }' 00:17:56.326 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.326 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.583 [2024-11-20 09:30:21.985365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.583 [2024-11-20 09:30:21.985693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.583 [2024-11-20 09:30:21.985710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.583 [2024-11-20 09:30:21.985997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:56.583 [2024-11-20 09:30:21.986176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.583 [2024-11-20 09:30:21.986191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:56.583 BaseBdev2 00:17:56.583 [2024-11-20 09:30:21.986373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.583 09:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.583 [ 00:17:56.583 { 00:17:56.583 "name": "BaseBdev2", 00:17:56.583 "aliases": [ 00:17:56.583 "4913f98d-5273-415c-907a-6aefec51971f" 00:17:56.583 ], 00:17:56.583 "product_name": "Malloc 
disk", 00:17:56.583 "block_size": 4096, 00:17:56.583 "num_blocks": 8192, 00:17:56.583 "uuid": "4913f98d-5273-415c-907a-6aefec51971f", 00:17:56.583 "assigned_rate_limits": { 00:17:56.583 "rw_ios_per_sec": 0, 00:17:56.583 "rw_mbytes_per_sec": 0, 00:17:56.583 "r_mbytes_per_sec": 0, 00:17:56.583 "w_mbytes_per_sec": 0 00:17:56.583 }, 00:17:56.583 "claimed": true, 00:17:56.583 "claim_type": "exclusive_write", 00:17:56.583 "zoned": false, 00:17:56.583 "supported_io_types": { 00:17:56.583 "read": true, 00:17:56.583 "write": true, 00:17:56.583 "unmap": true, 00:17:56.583 "flush": true, 00:17:56.583 "reset": true, 00:17:56.583 "nvme_admin": false, 00:17:56.583 "nvme_io": false, 00:17:56.583 "nvme_io_md": false, 00:17:56.583 "write_zeroes": true, 00:17:56.583 "zcopy": true, 00:17:56.583 "get_zone_info": false, 00:17:56.583 "zone_management": false, 00:17:56.583 "zone_append": false, 00:17:56.583 "compare": false, 00:17:56.583 "compare_and_write": false, 00:17:56.583 "abort": true, 00:17:56.583 "seek_hole": false, 00:17:56.583 "seek_data": false, 00:17:56.583 "copy": true, 00:17:56.583 "nvme_iov_md": false 00:17:56.583 }, 00:17:56.583 "memory_domains": [ 00:17:56.583 { 00:17:56.583 "dma_device_id": "system", 00:17:56.583 "dma_device_type": 1 00:17:56.583 }, 00:17:56.583 { 00:17:56.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.583 "dma_device_type": 2 00:17:56.583 } 00:17:56.583 ], 00:17:56.583 "driver_specific": {} 00:17:56.583 } 00:17:56.583 ] 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.583 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.842 "name": "Existed_Raid", 00:17:56.842 "uuid": "5afb172d-c289-4f89-8390-0a95344edbc3", 00:17:56.842 "strip_size_kb": 0, 00:17:56.842 "state": "online", 
00:17:56.842 "raid_level": "raid1", 00:17:56.842 "superblock": true, 00:17:56.842 "num_base_bdevs": 2, 00:17:56.842 "num_base_bdevs_discovered": 2, 00:17:56.842 "num_base_bdevs_operational": 2, 00:17:56.842 "base_bdevs_list": [ 00:17:56.842 { 00:17:56.842 "name": "BaseBdev1", 00:17:56.842 "uuid": "c9b35703-ee44-415c-a7f5-7e0d49ea4186", 00:17:56.842 "is_configured": true, 00:17:56.842 "data_offset": 256, 00:17:56.842 "data_size": 7936 00:17:56.842 }, 00:17:56.842 { 00:17:56.842 "name": "BaseBdev2", 00:17:56.842 "uuid": "4913f98d-5273-415c-907a-6aefec51971f", 00:17:56.842 "is_configured": true, 00:17:56.842 "data_offset": 256, 00:17:56.842 "data_size": 7936 00:17:56.842 } 00:17:56.842 ] 00:17:56.842 }' 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.842 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.100 [2024-11-20 09:30:22.508874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.100 "name": "Existed_Raid", 00:17:57.100 "aliases": [ 00:17:57.100 "5afb172d-c289-4f89-8390-0a95344edbc3" 00:17:57.100 ], 00:17:57.100 "product_name": "Raid Volume", 00:17:57.100 "block_size": 4096, 00:17:57.100 "num_blocks": 7936, 00:17:57.100 "uuid": "5afb172d-c289-4f89-8390-0a95344edbc3", 00:17:57.100 "assigned_rate_limits": { 00:17:57.100 "rw_ios_per_sec": 0, 00:17:57.100 "rw_mbytes_per_sec": 0, 00:17:57.100 "r_mbytes_per_sec": 0, 00:17:57.100 "w_mbytes_per_sec": 0 00:17:57.100 }, 00:17:57.100 "claimed": false, 00:17:57.100 "zoned": false, 00:17:57.100 "supported_io_types": { 00:17:57.100 "read": true, 00:17:57.100 "write": true, 00:17:57.100 "unmap": false, 00:17:57.100 "flush": false, 00:17:57.100 "reset": true, 00:17:57.100 "nvme_admin": false, 00:17:57.100 "nvme_io": false, 00:17:57.100 "nvme_io_md": false, 00:17:57.100 "write_zeroes": true, 00:17:57.100 "zcopy": false, 00:17:57.100 "get_zone_info": false, 00:17:57.100 "zone_management": false, 00:17:57.100 "zone_append": false, 00:17:57.100 "compare": false, 00:17:57.100 "compare_and_write": false, 00:17:57.100 "abort": false, 00:17:57.100 "seek_hole": false, 00:17:57.100 "seek_data": false, 00:17:57.100 "copy": false, 00:17:57.100 "nvme_iov_md": false 00:17:57.100 }, 00:17:57.100 "memory_domains": [ 00:17:57.100 { 00:17:57.100 "dma_device_id": "system", 00:17:57.100 "dma_device_type": 1 00:17:57.100 }, 00:17:57.100 { 00:17:57.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.100 "dma_device_type": 2 00:17:57.100 }, 00:17:57.100 { 00:17:57.100 
"dma_device_id": "system", 00:17:57.100 "dma_device_type": 1 00:17:57.100 }, 00:17:57.100 { 00:17:57.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.100 "dma_device_type": 2 00:17:57.100 } 00:17:57.100 ], 00:17:57.100 "driver_specific": { 00:17:57.100 "raid": { 00:17:57.100 "uuid": "5afb172d-c289-4f89-8390-0a95344edbc3", 00:17:57.100 "strip_size_kb": 0, 00:17:57.100 "state": "online", 00:17:57.100 "raid_level": "raid1", 00:17:57.100 "superblock": true, 00:17:57.100 "num_base_bdevs": 2, 00:17:57.100 "num_base_bdevs_discovered": 2, 00:17:57.100 "num_base_bdevs_operational": 2, 00:17:57.100 "base_bdevs_list": [ 00:17:57.100 { 00:17:57.100 "name": "BaseBdev1", 00:17:57.100 "uuid": "c9b35703-ee44-415c-a7f5-7e0d49ea4186", 00:17:57.100 "is_configured": true, 00:17:57.100 "data_offset": 256, 00:17:57.100 "data_size": 7936 00:17:57.100 }, 00:17:57.100 { 00:17:57.100 "name": "BaseBdev2", 00:17:57.100 "uuid": "4913f98d-5273-415c-907a-6aefec51971f", 00:17:57.100 "is_configured": true, 00:17:57.100 "data_offset": 256, 00:17:57.100 "data_size": 7936 00:17:57.100 } 00:17:57.100 ] 00:17:57.100 } 00:17:57.100 } 00:17:57.100 }' 00:17:57.100 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:57.358 BaseBdev2' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:57.358 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.358 
09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.358 [2024-11-20 09:30:22.728257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.617 09:30:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.617 "name": "Existed_Raid", 00:17:57.617 "uuid": "5afb172d-c289-4f89-8390-0a95344edbc3", 00:17:57.617 "strip_size_kb": 0, 00:17:57.617 "state": "online", 00:17:57.617 "raid_level": "raid1", 00:17:57.617 "superblock": true, 00:17:57.617 "num_base_bdevs": 2, 00:17:57.617 "num_base_bdevs_discovered": 1, 00:17:57.617 "num_base_bdevs_operational": 1, 00:17:57.617 "base_bdevs_list": [ 00:17:57.617 { 00:17:57.617 "name": null, 00:17:57.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.617 "is_configured": false, 00:17:57.617 "data_offset": 0, 00:17:57.617 "data_size": 7936 00:17:57.617 }, 00:17:57.617 { 00:17:57.617 "name": "BaseBdev2", 00:17:57.617 "uuid": "4913f98d-5273-415c-907a-6aefec51971f", 00:17:57.617 "is_configured": true, 00:17:57.617 "data_offset": 256, 00:17:57.617 "data_size": 7936 00:17:57.617 } 00:17:57.617 ] 00:17:57.617 }' 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.617 09:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:57.876 09:30:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.876 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.876 [2024-11-20 09:30:23.314119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.876 [2024-11-20 09:30:23.314294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.134 [2024-11-20 09:30:23.419284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.134 [2024-11-20 09:30:23.419450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.134 [2024-11-20 09:30:23.419502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:58.134 09:30:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86370 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86370 ']' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86370 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86370 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.135 killing process with pid 86370 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86370' 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86370 00:17:58.135 [2024-11-20 09:30:23.503864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.135 09:30:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86370 00:17:58.135 [2024-11-20 09:30:23.521090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.510 09:30:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:59.510 00:17:59.510 real 0m5.212s 00:17:59.510 user 0m7.515s 00:17:59.510 sys 0m0.843s 00:17:59.510 09:30:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.510 ************************************ 00:17:59.510 END TEST raid_state_function_test_sb_4k 00:17:59.510 ************************************ 00:17:59.510 09:30:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.510 09:30:24 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:59.510 09:30:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:59.510 09:30:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.510 09:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.510 ************************************ 00:17:59.510 START TEST raid_superblock_test_4k 00:17:59.510 ************************************ 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:59.510 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86617 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86617 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86617 ']' 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.511 09:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.511 [2024-11-20 09:30:24.831107] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:17:59.511 [2024-11-20 09:30:24.831331] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86617 ] 00:17:59.769 [2024-11-20 09:30:25.007478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.769 [2024-11-20 09:30:25.130503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.027 [2024-11-20 09:30:25.338402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.027 [2024-11-20 09:30:25.338576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:00.285 09:30:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.285 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.543 malloc1 00:18:00.543 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.543 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.543 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.543 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 [2024-11-20 09:30:25.769753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.544 [2024-11-20 09:30:25.769820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.544 
[2024-11-20 09:30:25.769847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:00.544 [2024-11-20 09:30:25.769856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.544 [2024-11-20 09:30:25.772230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.544 [2024-11-20 09:30:25.772271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.544 pt1 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 malloc2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 [2024-11-20 09:30:25.826678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.544 [2024-11-20 09:30:25.826825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.544 [2024-11-20 09:30:25.826872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:00.544 [2024-11-20 09:30:25.826906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.544 [2024-11-20 09:30:25.829291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.544 [2024-11-20 09:30:25.829372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.544 pt2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 [2024-11-20 09:30:25.838711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.544 [2024-11-20 09:30:25.840731] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.544 [2024-11-20 09:30:25.840968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:00.544 [2024-11-20 09:30:25.841025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.544 [2024-11-20 09:30:25.841327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.544 [2024-11-20 09:30:25.841566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:00.544 [2024-11-20 09:30:25.841619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:00.544 [2024-11-20 09:30:25.841827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.544 "name": "raid_bdev1", 00:18:00.544 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:00.544 "strip_size_kb": 0, 00:18:00.544 "state": "online", 00:18:00.544 "raid_level": "raid1", 00:18:00.544 "superblock": true, 00:18:00.544 "num_base_bdevs": 2, 00:18:00.544 "num_base_bdevs_discovered": 2, 00:18:00.544 "num_base_bdevs_operational": 2, 00:18:00.544 "base_bdevs_list": [ 00:18:00.544 { 00:18:00.544 "name": "pt1", 00:18:00.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.544 "is_configured": true, 00:18:00.544 "data_offset": 256, 00:18:00.544 "data_size": 7936 00:18:00.544 }, 00:18:00.544 { 00:18:00.544 "name": "pt2", 00:18:00.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.544 "is_configured": true, 00:18:00.544 "data_offset": 256, 00:18:00.544 "data_size": 7936 00:18:00.544 } 00:18:00.544 ] 00:18:00.544 }' 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.544 09:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.111 09:30:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.111 [2024-11-20 09:30:26.326163] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.111 "name": "raid_bdev1", 00:18:01.111 "aliases": [ 00:18:01.111 "39d12092-7191-4dd8-bd1f-aaf27dffbc0d" 00:18:01.111 ], 00:18:01.111 "product_name": "Raid Volume", 00:18:01.111 "block_size": 4096, 00:18:01.111 "num_blocks": 7936, 00:18:01.111 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:01.111 "assigned_rate_limits": { 00:18:01.111 "rw_ios_per_sec": 0, 00:18:01.111 "rw_mbytes_per_sec": 0, 00:18:01.111 "r_mbytes_per_sec": 0, 00:18:01.111 "w_mbytes_per_sec": 0 00:18:01.111 }, 00:18:01.111 "claimed": false, 00:18:01.111 "zoned": false, 00:18:01.111 "supported_io_types": { 00:18:01.111 "read": true, 00:18:01.111 "write": true, 00:18:01.111 "unmap": false, 00:18:01.111 "flush": false, 
00:18:01.111 "reset": true, 00:18:01.111 "nvme_admin": false, 00:18:01.111 "nvme_io": false, 00:18:01.111 "nvme_io_md": false, 00:18:01.111 "write_zeroes": true, 00:18:01.111 "zcopy": false, 00:18:01.111 "get_zone_info": false, 00:18:01.111 "zone_management": false, 00:18:01.111 "zone_append": false, 00:18:01.111 "compare": false, 00:18:01.111 "compare_and_write": false, 00:18:01.111 "abort": false, 00:18:01.111 "seek_hole": false, 00:18:01.111 "seek_data": false, 00:18:01.111 "copy": false, 00:18:01.111 "nvme_iov_md": false 00:18:01.111 }, 00:18:01.111 "memory_domains": [ 00:18:01.111 { 00:18:01.111 "dma_device_id": "system", 00:18:01.111 "dma_device_type": 1 00:18:01.111 }, 00:18:01.111 { 00:18:01.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.111 "dma_device_type": 2 00:18:01.111 }, 00:18:01.111 { 00:18:01.111 "dma_device_id": "system", 00:18:01.111 "dma_device_type": 1 00:18:01.111 }, 00:18:01.111 { 00:18:01.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.111 "dma_device_type": 2 00:18:01.111 } 00:18:01.111 ], 00:18:01.111 "driver_specific": { 00:18:01.111 "raid": { 00:18:01.111 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:01.111 "strip_size_kb": 0, 00:18:01.111 "state": "online", 00:18:01.111 "raid_level": "raid1", 00:18:01.111 "superblock": true, 00:18:01.111 "num_base_bdevs": 2, 00:18:01.111 "num_base_bdevs_discovered": 2, 00:18:01.111 "num_base_bdevs_operational": 2, 00:18:01.111 "base_bdevs_list": [ 00:18:01.111 { 00:18:01.111 "name": "pt1", 00:18:01.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.111 "is_configured": true, 00:18:01.111 "data_offset": 256, 00:18:01.111 "data_size": 7936 00:18:01.111 }, 00:18:01.111 { 00:18:01.111 "name": "pt2", 00:18:01.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.111 "is_configured": true, 00:18:01.111 "data_offset": 256, 00:18:01.111 "data_size": 7936 00:18:01.111 } 00:18:01.111 ] 00:18:01.111 } 00:18:01.111 } 00:18:01.111 }' 00:18:01.111 09:30:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.111 pt2' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.111 [2024-11-20 09:30:26.541777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.111 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=39d12092-7191-4dd8-bd1f-aaf27dffbc0d 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 39d12092-7191-4dd8-bd1f-aaf27dffbc0d ']' 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 [2024-11-20 09:30:26.585386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.400 [2024-11-20 09:30:26.585467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.400 [2024-11-20 09:30:26.585598] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.400 [2024-11-20 09:30:26.585685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.400 [2024-11-20 09:30:26.585742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 [2024-11-20 09:30:26.701272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:01.400 [2024-11-20 09:30:26.703423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:01.400 [2024-11-20 09:30:26.703516] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:01.400 [2024-11-20 09:30:26.703584] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:01.400 [2024-11-20 09:30:26.703601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.400 [2024-11-20 09:30:26.703614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:01.400 request: 00:18:01.400 { 00:18:01.400 "name": "raid_bdev1", 00:18:01.400 "raid_level": "raid1", 00:18:01.400 "base_bdevs": [ 00:18:01.400 "malloc1", 00:18:01.400 "malloc2" 00:18:01.401 ], 00:18:01.401 "superblock": false, 00:18:01.401 "method": "bdev_raid_create", 00:18:01.401 "req_id": 1 00:18:01.401 } 00:18:01.401 Got JSON-RPC error response 00:18:01.401 response: 00:18:01.401 { 00:18:01.401 "code": -17, 00:18:01.401 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:01.401 } 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.401 [2024-11-20 09:30:26.761129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.401 [2024-11-20 09:30:26.761278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.401 [2024-11-20 09:30:26.761318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.401 [2024-11-20 09:30:26.761355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.401 [2024-11-20 09:30:26.763783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.401 [2024-11-20 09:30:26.763877] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.401 [2024-11-20 09:30:26.764020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:01.401 [2024-11-20 09:30:26.764127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.401 pt1 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.401 "name": "raid_bdev1", 00:18:01.401 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:01.401 "strip_size_kb": 0, 00:18:01.401 "state": "configuring", 00:18:01.401 "raid_level": "raid1", 00:18:01.401 "superblock": true, 00:18:01.401 "num_base_bdevs": 2, 00:18:01.401 "num_base_bdevs_discovered": 1, 00:18:01.401 "num_base_bdevs_operational": 2, 00:18:01.401 "base_bdevs_list": [ 00:18:01.401 { 00:18:01.401 "name": "pt1", 00:18:01.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.401 "is_configured": true, 00:18:01.401 "data_offset": 256, 00:18:01.401 "data_size": 7936 00:18:01.401 }, 00:18:01.401 { 00:18:01.401 "name": null, 00:18:01.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.401 "is_configured": false, 00:18:01.401 "data_offset": 256, 00:18:01.401 "data_size": 7936 00:18:01.401 } 00:18:01.401 ] 00:18:01.401 }' 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.401 09:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:18:01.972 [2024-11-20 09:30:27.245221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.972 [2024-11-20 09:30:27.245329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.972 [2024-11-20 09:30:27.245371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:01.972 [2024-11-20 09:30:27.245388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.972 [2024-11-20 09:30:27.246163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.972 [2024-11-20 09:30:27.246204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.972 [2024-11-20 09:30:27.246389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.972 [2024-11-20 09:30:27.246444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.972 [2024-11-20 09:30:27.246632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.972 [2024-11-20 09:30:27.246654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.972 [2024-11-20 09:30:27.246932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.972 [2024-11-20 09:30:27.247140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.972 [2024-11-20 09:30:27.247152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:01.972 [2024-11-20 09:30:27.247343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.972 pt2 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.972 09:30:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.972 "name": "raid_bdev1", 00:18:01.972 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:01.972 
"strip_size_kb": 0, 00:18:01.972 "state": "online", 00:18:01.972 "raid_level": "raid1", 00:18:01.972 "superblock": true, 00:18:01.972 "num_base_bdevs": 2, 00:18:01.972 "num_base_bdevs_discovered": 2, 00:18:01.972 "num_base_bdevs_operational": 2, 00:18:01.972 "base_bdevs_list": [ 00:18:01.972 { 00:18:01.972 "name": "pt1", 00:18:01.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.972 "is_configured": true, 00:18:01.972 "data_offset": 256, 00:18:01.972 "data_size": 7936 00:18:01.972 }, 00:18:01.972 { 00:18:01.972 "name": "pt2", 00:18:01.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.972 "is_configured": true, 00:18:01.972 "data_offset": 256, 00:18:01.972 "data_size": 7936 00:18:01.972 } 00:18:01.972 ] 00:18:01.972 }' 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.972 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.540 09:30:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.540 [2024-11-20 09:30:27.716698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.540 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.540 "name": "raid_bdev1", 00:18:02.540 "aliases": [ 00:18:02.540 "39d12092-7191-4dd8-bd1f-aaf27dffbc0d" 00:18:02.540 ], 00:18:02.540 "product_name": "Raid Volume", 00:18:02.540 "block_size": 4096, 00:18:02.540 "num_blocks": 7936, 00:18:02.540 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:02.540 "assigned_rate_limits": { 00:18:02.540 "rw_ios_per_sec": 0, 00:18:02.540 "rw_mbytes_per_sec": 0, 00:18:02.540 "r_mbytes_per_sec": 0, 00:18:02.540 "w_mbytes_per_sec": 0 00:18:02.540 }, 00:18:02.540 "claimed": false, 00:18:02.540 "zoned": false, 00:18:02.540 "supported_io_types": { 00:18:02.540 "read": true, 00:18:02.540 "write": true, 00:18:02.540 "unmap": false, 00:18:02.540 "flush": false, 00:18:02.540 "reset": true, 00:18:02.540 "nvme_admin": false, 00:18:02.540 "nvme_io": false, 00:18:02.540 "nvme_io_md": false, 00:18:02.540 "write_zeroes": true, 00:18:02.540 "zcopy": false, 00:18:02.540 "get_zone_info": false, 00:18:02.540 "zone_management": false, 00:18:02.540 "zone_append": false, 00:18:02.540 "compare": false, 00:18:02.540 "compare_and_write": false, 00:18:02.540 "abort": false, 00:18:02.540 "seek_hole": false, 00:18:02.540 "seek_data": false, 00:18:02.540 "copy": false, 00:18:02.540 "nvme_iov_md": false 00:18:02.540 }, 00:18:02.540 "memory_domains": [ 00:18:02.540 { 00:18:02.540 "dma_device_id": "system", 00:18:02.540 "dma_device_type": 1 00:18:02.540 }, 00:18:02.540 { 00:18:02.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.540 "dma_device_type": 2 00:18:02.540 }, 00:18:02.540 { 00:18:02.540 "dma_device_id": "system", 00:18:02.540 
"dma_device_type": 1 00:18:02.540 }, 00:18:02.540 { 00:18:02.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.541 "dma_device_type": 2 00:18:02.541 } 00:18:02.541 ], 00:18:02.541 "driver_specific": { 00:18:02.541 "raid": { 00:18:02.541 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:02.541 "strip_size_kb": 0, 00:18:02.541 "state": "online", 00:18:02.541 "raid_level": "raid1", 00:18:02.541 "superblock": true, 00:18:02.541 "num_base_bdevs": 2, 00:18:02.541 "num_base_bdevs_discovered": 2, 00:18:02.541 "num_base_bdevs_operational": 2, 00:18:02.541 "base_bdevs_list": [ 00:18:02.541 { 00:18:02.541 "name": "pt1", 00:18:02.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.541 "is_configured": true, 00:18:02.541 "data_offset": 256, 00:18:02.541 "data_size": 7936 00:18:02.541 }, 00:18:02.541 { 00:18:02.541 "name": "pt2", 00:18:02.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.541 "is_configured": true, 00:18:02.541 "data_offset": 256, 00:18:02.541 "data_size": 7936 00:18:02.541 } 00:18:02.541 ] 00:18:02.541 } 00:18:02.541 } 00:18:02.541 }' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.541 pt2' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.541 [2024-11-20 09:30:27.948208] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.541 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 39d12092-7191-4dd8-bd1f-aaf27dffbc0d '!=' 39d12092-7191-4dd8-bd1f-aaf27dffbc0d ']' 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.800 09:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.800 [2024-11-20 09:30:27.999972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.800 "name": "raid_bdev1", 00:18:02.800 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:02.800 "strip_size_kb": 0, 00:18:02.800 "state": "online", 00:18:02.800 "raid_level": "raid1", 00:18:02.800 "superblock": true, 00:18:02.800 "num_base_bdevs": 2, 00:18:02.800 "num_base_bdevs_discovered": 1, 00:18:02.800 "num_base_bdevs_operational": 1, 00:18:02.800 "base_bdevs_list": [ 00:18:02.800 { 00:18:02.800 "name": null, 00:18:02.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.800 "is_configured": false, 00:18:02.800 "data_offset": 0, 00:18:02.800 "data_size": 7936 00:18:02.800 }, 00:18:02.800 { 00:18:02.800 "name": "pt2", 00:18:02.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.800 "is_configured": true, 00:18:02.800 "data_offset": 256, 00:18:02.800 "data_size": 7936 00:18:02.800 } 00:18:02.800 ] 00:18:02.800 }' 00:18:02.800 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.800 09:30:28 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.059 [2024-11-20 09:30:28.475145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.059 [2024-11-20 09:30:28.475198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.059 [2024-11-20 09:30:28.475294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.059 [2024-11-20 09:30:28.475347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.059 [2024-11-20 09:30:28.475360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.059 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.318 [2024-11-20 09:30:28.555036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.318 [2024-11-20 09:30:28.555238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.318 [2024-11-20 09:30:28.555285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:03.318 [2024-11-20 09:30:28.555354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.318 [2024-11-20 09:30:28.557774] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.318 [2024-11-20 09:30:28.557878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.318 [2024-11-20 09:30:28.557992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.318 [2024-11-20 09:30:28.558073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.318 [2024-11-20 09:30:28.558210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.318 [2024-11-20 09:30:28.558225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.318 [2024-11-20 09:30:28.558507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:03.318 [2024-11-20 09:30:28.558695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.318 [2024-11-20 09:30:28.558706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:03.318 [2024-11-20 09:30:28.558942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.318 pt2 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.318 "name": "raid_bdev1", 00:18:03.318 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:03.318 "strip_size_kb": 0, 00:18:03.318 "state": "online", 00:18:03.318 "raid_level": "raid1", 00:18:03.318 "superblock": true, 00:18:03.318 "num_base_bdevs": 2, 00:18:03.318 "num_base_bdevs_discovered": 1, 00:18:03.318 "num_base_bdevs_operational": 1, 00:18:03.318 "base_bdevs_list": [ 00:18:03.318 { 00:18:03.318 "name": null, 00:18:03.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.318 "is_configured": false, 00:18:03.318 "data_offset": 256, 00:18:03.318 "data_size": 7936 00:18:03.318 }, 00:18:03.318 { 00:18:03.318 "name": "pt2", 00:18:03.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.318 "is_configured": true, 00:18:03.318 "data_offset": 256, 00:18:03.318 "data_size": 7936 00:18:03.318 } 00:18:03.318 ] 00:18:03.318 }' 
00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.318 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.578 [2024-11-20 09:30:28.942301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.578 [2024-11-20 09:30:28.942402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.578 [2024-11-20 09:30:28.942530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.578 [2024-11-20 09:30:28.942627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.578 [2024-11-20 09:30:28.942684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.578 09:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.578 [2024-11-20 09:30:29.010215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.578 [2024-11-20 09:30:29.010290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.578 [2024-11-20 09:30:29.010314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:03.578 [2024-11-20 09:30:29.010324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.578 [2024-11-20 09:30:29.012775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.578 [2024-11-20 09:30:29.012816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.578 [2024-11-20 09:30:29.012913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:03.578 [2024-11-20 09:30:29.012963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.578 [2024-11-20 09:30:29.013120] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:03.578 [2024-11-20 09:30:29.013132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.578 [2024-11-20 09:30:29.013149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:03.578 [2024-11-20 09:30:29.013245] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.578 [2024-11-20 09:30:29.013331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:03.578 [2024-11-20 09:30:29.013347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.578 [2024-11-20 09:30:29.013624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:03.578 [2024-11-20 09:30:29.013782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:03.578 [2024-11-20 09:30:29.013796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:03.578 [2024-11-20 09:30:29.013982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.578 pt1 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.578 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.839 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.839 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.839 "name": "raid_bdev1", 00:18:03.839 "uuid": "39d12092-7191-4dd8-bd1f-aaf27dffbc0d", 00:18:03.839 "strip_size_kb": 0, 00:18:03.839 "state": "online", 00:18:03.839 "raid_level": "raid1", 00:18:03.839 "superblock": true, 00:18:03.839 "num_base_bdevs": 2, 00:18:03.839 "num_base_bdevs_discovered": 1, 00:18:03.839 "num_base_bdevs_operational": 1, 00:18:03.839 "base_bdevs_list": [ 00:18:03.839 { 00:18:03.839 "name": null, 00:18:03.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.839 "is_configured": false, 00:18:03.839 "data_offset": 256, 00:18:03.839 "data_size": 7936 00:18:03.840 }, 00:18:03.840 { 00:18:03.840 "name": "pt2", 00:18:03.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.840 "is_configured": true, 00:18:03.840 "data_offset": 256, 00:18:03.840 "data_size": 7936 00:18:03.840 } 00:18:03.840 ] 00:18:03.840 }' 00:18:03.840 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.840 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.099 [2024-11-20 09:30:29.505662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 39d12092-7191-4dd8-bd1f-aaf27dffbc0d '!=' 39d12092-7191-4dd8-bd1f-aaf27dffbc0d ']' 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86617 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86617 ']' 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86617 00:18:04.099 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:04.358 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:18:04.358 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86617 00:18:04.358 killing process with pid 86617 00:18:04.358 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.358 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.358 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86617' 00:18:04.358 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86617 00:18:04.359 [2024-11-20 09:30:29.590017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.359 [2024-11-20 09:30:29.590108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.359 [2024-11-20 09:30:29.590154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.359 [2024-11-20 09:30:29.590167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:04.359 09:30:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86617 00:18:04.618 [2024-11-20 09:30:29.815593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.587 09:30:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:05.587 00:18:05.587 real 0m6.246s 00:18:05.587 user 0m9.430s 00:18:05.587 sys 0m1.136s 00:18:05.587 09:30:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.587 09:30:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.587 ************************************ 00:18:05.587 END TEST raid_superblock_test_4k 00:18:05.587 ************************************ 00:18:05.587 09:30:31 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:18:05.587 09:30:31 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:05.587 09:30:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:05.587 09:30:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.587 09:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.846 ************************************ 00:18:05.846 START TEST raid_rebuild_test_sb_4k 00:18:05.846 ************************************ 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86948 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86948 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86948 ']' 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.846 09:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.846 [2024-11-20 09:30:31.152288] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:18:05.846 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.846 Zero copy mechanism will not be used. 00:18:05.846 [2024-11-20 09:30:31.152519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86948 ] 00:18:06.105 [2024-11-20 09:30:31.328154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.105 [2024-11-20 09:30:31.444644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.364 [2024-11-20 09:30:31.649055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.364 [2024-11-20 09:30:31.649193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:06.623 
09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.623 BaseBdev1_malloc 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.623 [2024-11-20 09:30:32.058399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.623 [2024-11-20 09:30:32.058482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.623 [2024-11-20 09:30:32.058505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.623 [2024-11-20 09:30:32.058517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.623 [2024-11-20 09:30:32.060782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.623 [2024-11-20 09:30:32.060890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.623 BaseBdev1 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.623 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.884 BaseBdev2_malloc 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.884 [2024-11-20 09:30:32.112483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.884 [2024-11-20 09:30:32.112541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.884 [2024-11-20 09:30:32.112560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.884 [2024-11-20 09:30:32.112570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.884 [2024-11-20 09:30:32.114600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.884 [2024-11-20 09:30:32.114647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.884 BaseBdev2 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.884 spare_malloc 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.884 spare_delay 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.884 [2024-11-20 09:30:32.185555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.884 [2024-11-20 09:30:32.185615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.884 [2024-11-20 09:30:32.185634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.884 [2024-11-20 09:30:32.185645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.884 [2024-11-20 09:30:32.187860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.884 [2024-11-20 09:30:32.187954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.884 spare 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.884 
[2024-11-20 09:30:32.197636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.884 [2024-11-20 09:30:32.199379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.884 [2024-11-20 09:30:32.199574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.884 [2024-11-20 09:30:32.199592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.884 [2024-11-20 09:30:32.199854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.884 [2024-11-20 09:30:32.200026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.884 [2024-11-20 09:30:32.200034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.884 [2024-11-20 09:30:32.200192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.884 "name": "raid_bdev1", 00:18:06.884 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:06.884 "strip_size_kb": 0, 00:18:06.884 "state": "online", 00:18:06.884 "raid_level": "raid1", 00:18:06.884 "superblock": true, 00:18:06.884 "num_base_bdevs": 2, 00:18:06.884 "num_base_bdevs_discovered": 2, 00:18:06.884 "num_base_bdevs_operational": 2, 00:18:06.884 "base_bdevs_list": [ 00:18:06.884 { 00:18:06.884 "name": "BaseBdev1", 00:18:06.884 "uuid": "bcb62b0e-42f2-5478-a740-737501f46ff9", 00:18:06.884 "is_configured": true, 00:18:06.884 "data_offset": 256, 00:18:06.884 "data_size": 7936 00:18:06.884 }, 00:18:06.884 { 00:18:06.884 "name": "BaseBdev2", 00:18:06.884 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:06.884 "is_configured": true, 00:18:06.884 "data_offset": 256, 00:18:06.884 "data_size": 7936 00:18:06.884 } 00:18:06.884 ] 00:18:06.884 }' 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.884 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.451 [2024-11-20 09:30:32.633186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.451 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.452 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:07.710 [2024-11-20 09:30:32.916494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:07.710 /dev/nbd0 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.710 1+0 records in 00:18:07.710 1+0 records out 00:18:07.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474351 s, 8.6 MB/s 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:07.710 09:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:08.276 7936+0 records in 00:18:08.276 7936+0 records out 00:18:08.276 32505856 bytes (33 MB, 31 MiB) copied, 0.674469 s, 48.2 MB/s 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.276 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:08.535 [2024-11-20 09:30:33.896945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.535 [2024-11-20 09:30:33.917612] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.535 "name": 
"raid_bdev1", 00:18:08.535 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:08.535 "strip_size_kb": 0, 00:18:08.535 "state": "online", 00:18:08.535 "raid_level": "raid1", 00:18:08.535 "superblock": true, 00:18:08.535 "num_base_bdevs": 2, 00:18:08.535 "num_base_bdevs_discovered": 1, 00:18:08.535 "num_base_bdevs_operational": 1, 00:18:08.535 "base_bdevs_list": [ 00:18:08.535 { 00:18:08.535 "name": null, 00:18:08.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.535 "is_configured": false, 00:18:08.535 "data_offset": 0, 00:18:08.535 "data_size": 7936 00:18:08.535 }, 00:18:08.535 { 00:18:08.535 "name": "BaseBdev2", 00:18:08.535 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:08.535 "is_configured": true, 00:18:08.535 "data_offset": 256, 00:18:08.535 "data_size": 7936 00:18:08.535 } 00:18:08.535 ] 00:18:08.535 }' 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.535 09:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.103 09:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.103 09:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.103 09:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.103 [2024-11-20 09:30:34.364880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.103 [2024-11-20 09:30:34.383672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:09.103 09:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.103 09:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:09.103 [2024-11-20 09:30:34.385718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.043 09:30:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.043 "name": "raid_bdev1", 00:18:10.043 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:10.043 "strip_size_kb": 0, 00:18:10.043 "state": "online", 00:18:10.043 "raid_level": "raid1", 00:18:10.043 "superblock": true, 00:18:10.043 "num_base_bdevs": 2, 00:18:10.043 "num_base_bdevs_discovered": 2, 00:18:10.043 "num_base_bdevs_operational": 2, 00:18:10.043 "process": { 00:18:10.043 "type": "rebuild", 00:18:10.043 "target": "spare", 00:18:10.043 "progress": { 00:18:10.043 "blocks": 2560, 00:18:10.043 "percent": 32 00:18:10.043 } 00:18:10.043 }, 00:18:10.043 "base_bdevs_list": [ 00:18:10.043 { 00:18:10.043 "name": "spare", 00:18:10.043 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:10.043 "is_configured": true, 00:18:10.043 "data_offset": 256, 
00:18:10.043 "data_size": 7936 00:18:10.043 }, 00:18:10.043 { 00:18:10.043 "name": "BaseBdev2", 00:18:10.043 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:10.043 "is_configured": true, 00:18:10.043 "data_offset": 256, 00:18:10.043 "data_size": 7936 00:18:10.043 } 00:18:10.043 ] 00:18:10.043 }' 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.043 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 [2024-11-20 09:30:35.517305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.303 [2024-11-20 09:30:35.591548] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:10.303 [2024-11-20 09:30:35.591665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.303 [2024-11-20 09:30:35.591682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.303 [2024-11-20 09:30:35.591691] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.303 
09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.303 "name": "raid_bdev1", 00:18:10.303 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:10.303 "strip_size_kb": 0, 00:18:10.303 "state": "online", 00:18:10.303 "raid_level": "raid1", 00:18:10.303 "superblock": true, 00:18:10.303 "num_base_bdevs": 2, 00:18:10.303 "num_base_bdevs_discovered": 1, 00:18:10.303 
"num_base_bdevs_operational": 1, 00:18:10.303 "base_bdevs_list": [ 00:18:10.303 { 00:18:10.303 "name": null, 00:18:10.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.303 "is_configured": false, 00:18:10.303 "data_offset": 0, 00:18:10.303 "data_size": 7936 00:18:10.303 }, 00:18:10.303 { 00:18:10.303 "name": "BaseBdev2", 00:18:10.303 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:10.303 "is_configured": true, 00:18:10.303 "data_offset": 256, 00:18:10.303 "data_size": 7936 00:18:10.303 } 00:18:10.303 ] 00:18:10.303 }' 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.303 09:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.873 
"name": "raid_bdev1", 00:18:10.873 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:10.873 "strip_size_kb": 0, 00:18:10.873 "state": "online", 00:18:10.873 "raid_level": "raid1", 00:18:10.873 "superblock": true, 00:18:10.873 "num_base_bdevs": 2, 00:18:10.873 "num_base_bdevs_discovered": 1, 00:18:10.873 "num_base_bdevs_operational": 1, 00:18:10.873 "base_bdevs_list": [ 00:18:10.873 { 00:18:10.873 "name": null, 00:18:10.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.873 "is_configured": false, 00:18:10.873 "data_offset": 0, 00:18:10.873 "data_size": 7936 00:18:10.873 }, 00:18:10.873 { 00:18:10.873 "name": "BaseBdev2", 00:18:10.873 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:10.873 "is_configured": true, 00:18:10.873 "data_offset": 256, 00:18:10.873 "data_size": 7936 00:18:10.873 } 00:18:10.873 ] 00:18:10.873 }' 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.873 [2024-11-20 09:30:36.215325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.873 [2024-11-20 09:30:36.231824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:10.873 09:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:10.873 [2024-11-20 09:30:36.233701] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.811 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.070 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.070 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.070 "name": "raid_bdev1", 00:18:12.070 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:12.070 "strip_size_kb": 0, 00:18:12.070 "state": "online", 00:18:12.070 "raid_level": "raid1", 00:18:12.070 "superblock": true, 00:18:12.070 "num_base_bdevs": 2, 00:18:12.070 "num_base_bdevs_discovered": 2, 00:18:12.070 "num_base_bdevs_operational": 2, 00:18:12.070 "process": { 00:18:12.070 "type": "rebuild", 00:18:12.070 "target": "spare", 00:18:12.070 "progress": { 00:18:12.070 "blocks": 2560, 00:18:12.070 
"percent": 32 00:18:12.070 } 00:18:12.070 }, 00:18:12.070 "base_bdevs_list": [ 00:18:12.070 { 00:18:12.070 "name": "spare", 00:18:12.070 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:12.070 "is_configured": true, 00:18:12.070 "data_offset": 256, 00:18:12.070 "data_size": 7936 00:18:12.070 }, 00:18:12.070 { 00:18:12.070 "name": "BaseBdev2", 00:18:12.070 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:12.070 "is_configured": true, 00:18:12.070 "data_offset": 256, 00:18:12.070 "data_size": 7936 00:18:12.070 } 00:18:12.070 ] 00:18:12.070 }' 00:18:12.070 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.070 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.070 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:12.071 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=712 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.071 "name": "raid_bdev1", 00:18:12.071 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:12.071 "strip_size_kb": 0, 00:18:12.071 "state": "online", 00:18:12.071 "raid_level": "raid1", 00:18:12.071 "superblock": true, 00:18:12.071 "num_base_bdevs": 2, 00:18:12.071 "num_base_bdevs_discovered": 2, 00:18:12.071 "num_base_bdevs_operational": 2, 00:18:12.071 "process": { 00:18:12.071 "type": "rebuild", 00:18:12.071 "target": "spare", 00:18:12.071 "progress": { 00:18:12.071 "blocks": 2816, 00:18:12.071 "percent": 35 00:18:12.071 } 00:18:12.071 }, 00:18:12.071 "base_bdevs_list": [ 00:18:12.071 { 00:18:12.071 "name": "spare", 00:18:12.071 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:12.071 "is_configured": true, 00:18:12.071 "data_offset": 256, 00:18:12.071 "data_size": 7936 00:18:12.071 }, 00:18:12.071 { 00:18:12.071 "name": "BaseBdev2", 
00:18:12.071 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:12.071 "is_configured": true, 00:18:12.071 "data_offset": 256, 00:18:12.071 "data_size": 7936 00:18:12.071 } 00:18:12.071 ] 00:18:12.071 }' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.071 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.330 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.330 09:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.269 "name": "raid_bdev1", 00:18:13.269 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:13.269 "strip_size_kb": 0, 00:18:13.269 "state": "online", 00:18:13.269 "raid_level": "raid1", 00:18:13.269 "superblock": true, 00:18:13.269 "num_base_bdevs": 2, 00:18:13.269 "num_base_bdevs_discovered": 2, 00:18:13.269 "num_base_bdevs_operational": 2, 00:18:13.269 "process": { 00:18:13.269 "type": "rebuild", 00:18:13.269 "target": "spare", 00:18:13.269 "progress": { 00:18:13.269 "blocks": 5888, 00:18:13.269 "percent": 74 00:18:13.269 } 00:18:13.269 }, 00:18:13.269 "base_bdevs_list": [ 00:18:13.269 { 00:18:13.269 "name": "spare", 00:18:13.269 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:13.269 "is_configured": true, 00:18:13.269 "data_offset": 256, 00:18:13.269 "data_size": 7936 00:18:13.269 }, 00:18:13.269 { 00:18:13.269 "name": "BaseBdev2", 00:18:13.269 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:13.269 "is_configured": true, 00:18:13.269 "data_offset": 256, 00:18:13.269 "data_size": 7936 00:18:13.269 } 00:18:13.269 ] 00:18:13.269 }' 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.269 09:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.213 [2024-11-20 09:30:39.349005] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:14.213 [2024-11-20 09:30:39.349233] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:14.213 [2024-11-20 09:30:39.349438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.472 "name": "raid_bdev1", 00:18:14.472 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:14.472 "strip_size_kb": 0, 00:18:14.472 "state": "online", 00:18:14.472 "raid_level": "raid1", 00:18:14.472 "superblock": true, 00:18:14.472 "num_base_bdevs": 2, 00:18:14.472 "num_base_bdevs_discovered": 2, 00:18:14.472 "num_base_bdevs_operational": 2, 00:18:14.472 "base_bdevs_list": [ 00:18:14.472 { 00:18:14.472 "name": 
"spare", 00:18:14.472 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:14.472 "is_configured": true, 00:18:14.472 "data_offset": 256, 00:18:14.472 "data_size": 7936 00:18:14.472 }, 00:18:14.472 { 00:18:14.472 "name": "BaseBdev2", 00:18:14.472 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:14.472 "is_configured": true, 00:18:14.472 "data_offset": 256, 00:18:14.472 "data_size": 7936 00:18:14.472 } 00:18:14.472 ] 00:18:14.472 }' 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.472 "name": "raid_bdev1", 00:18:14.472 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:14.472 "strip_size_kb": 0, 00:18:14.472 "state": "online", 00:18:14.472 "raid_level": "raid1", 00:18:14.472 "superblock": true, 00:18:14.472 "num_base_bdevs": 2, 00:18:14.472 "num_base_bdevs_discovered": 2, 00:18:14.472 "num_base_bdevs_operational": 2, 00:18:14.472 "base_bdevs_list": [ 00:18:14.472 { 00:18:14.472 "name": "spare", 00:18:14.472 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:14.472 "is_configured": true, 00:18:14.472 "data_offset": 256, 00:18:14.472 "data_size": 7936 00:18:14.472 }, 00:18:14.472 { 00:18:14.472 "name": "BaseBdev2", 00:18:14.472 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:14.472 "is_configured": true, 00:18:14.472 "data_offset": 256, 00:18:14.472 "data_size": 7936 00:18:14.472 } 00:18:14.472 ] 00:18:14.472 }' 00:18:14.472 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.731 09:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.731 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.731 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.731 "name": "raid_bdev1", 00:18:14.731 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:14.731 "strip_size_kb": 0, 00:18:14.731 "state": "online", 00:18:14.731 "raid_level": "raid1", 00:18:14.731 "superblock": true, 00:18:14.731 "num_base_bdevs": 2, 00:18:14.731 "num_base_bdevs_discovered": 2, 00:18:14.731 "num_base_bdevs_operational": 2, 00:18:14.731 "base_bdevs_list": [ 00:18:14.731 { 00:18:14.731 "name": "spare", 00:18:14.731 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:14.731 "is_configured": true, 00:18:14.731 "data_offset": 256, 00:18:14.731 "data_size": 7936 00:18:14.731 }, 00:18:14.731 
{ 00:18:14.731 "name": "BaseBdev2", 00:18:14.731 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:14.731 "is_configured": true, 00:18:14.731 "data_offset": 256, 00:18:14.731 "data_size": 7936 00:18:14.731 } 00:18:14.731 ] 00:18:14.732 }' 00:18:14.732 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.732 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.299 [2024-11-20 09:30:40.478961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.299 [2024-11-20 09:30:40.478997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.299 [2024-11-20 09:30:40.479095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.299 [2024-11-20 09:30:40.479185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.299 [2024-11-20 09:30:40.479200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.299 
09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.299 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:15.558 /dev/nbd0 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:15.558 09:30:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.558 1+0 records in 00:18:15.558 1+0 records out 00:18:15.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396746 s, 10.3 MB/s 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.558 09:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:15.817 /dev/nbd1 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:15.818 1+0 records in 00:18:15.818 1+0 records out 00:18:15.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371621 s, 11.0 MB/s 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:15.818 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.077 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.336 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.595 09:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.595 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.595 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.596 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.596 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.596 [2024-11-20 09:30:42.017270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.596 [2024-11-20 09:30:42.017466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.596 [2024-11-20 09:30:42.017504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:16.596 [2024-11-20 09:30:42.017515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.596 [2024-11-20 09:30:42.020134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.596 [2024-11-20 09:30:42.020196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.596 [2024-11-20 09:30:42.020305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.596 [2024-11-20 09:30:42.020374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.596 [2024-11-20 09:30:42.020647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.596 spare 00:18:16.596 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.596 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:16.596 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.596 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.861 [2024-11-20 09:30:42.120612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:16.861 [2024-11-20 09:30:42.120680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.861 [2024-11-20 09:30:42.121075] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:16.861 [2024-11-20 09:30:42.121314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:16.861 [2024-11-20 09:30:42.121326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:16.861 [2024-11-20 09:30:42.121597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.861 09:30:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.861 "name": "raid_bdev1", 00:18:16.861 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:16.861 "strip_size_kb": 0, 00:18:16.861 "state": "online", 00:18:16.861 "raid_level": "raid1", 00:18:16.861 "superblock": true, 00:18:16.861 "num_base_bdevs": 2, 00:18:16.861 "num_base_bdevs_discovered": 2, 00:18:16.861 "num_base_bdevs_operational": 2, 00:18:16.861 "base_bdevs_list": [ 00:18:16.861 { 00:18:16.861 "name": "spare", 00:18:16.861 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:16.861 "is_configured": true, 00:18:16.861 "data_offset": 256, 00:18:16.861 "data_size": 7936 00:18:16.861 }, 00:18:16.861 { 00:18:16.861 "name": "BaseBdev2", 00:18:16.861 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:16.861 "is_configured": true, 00:18:16.861 "data_offset": 256, 00:18:16.861 "data_size": 7936 00:18:16.861 } 00:18:16.861 ] 00:18:16.861 }' 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.861 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.440 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.440 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.440 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.440 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.440 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.441 "name": "raid_bdev1", 00:18:17.441 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:17.441 "strip_size_kb": 0, 00:18:17.441 "state": "online", 00:18:17.441 "raid_level": "raid1", 00:18:17.441 "superblock": true, 00:18:17.441 "num_base_bdevs": 2, 00:18:17.441 "num_base_bdevs_discovered": 2, 00:18:17.441 "num_base_bdevs_operational": 2, 00:18:17.441 "base_bdevs_list": [ 00:18:17.441 { 00:18:17.441 "name": "spare", 00:18:17.441 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 256, 00:18:17.441 "data_size": 7936 00:18:17.441 }, 00:18:17.441 { 00:18:17.441 "name": "BaseBdev2", 00:18:17.441 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 256, 00:18:17.441 "data_size": 7936 00:18:17.441 } 00:18:17.441 ] 00:18:17.441 }' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.441 [2024-11-20 09:30:42.820411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.441 09:30:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.441 "name": "raid_bdev1", 00:18:17.441 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:17.441 "strip_size_kb": 0, 00:18:17.441 "state": "online", 00:18:17.441 "raid_level": "raid1", 00:18:17.441 "superblock": true, 00:18:17.441 "num_base_bdevs": 2, 00:18:17.441 "num_base_bdevs_discovered": 1, 00:18:17.441 "num_base_bdevs_operational": 1, 00:18:17.441 "base_bdevs_list": [ 00:18:17.441 { 00:18:17.441 "name": null, 00:18:17.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.441 "is_configured": false, 00:18:17.441 "data_offset": 0, 00:18:17.441 "data_size": 7936 00:18:17.441 }, 00:18:17.441 { 00:18:17.441 "name": "BaseBdev2", 00:18:17.441 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 256, 00:18:17.441 "data_size": 7936 00:18:17.441 } 00:18:17.441 ] 00:18:17.441 }' 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.441 09:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.008 09:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.008 09:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.008 09:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.008 [2024-11-20 09:30:43.315608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.008 [2024-11-20 09:30:43.315959] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:18.008 [2024-11-20 09:30:43.316036] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:18.008 [2024-11-20 09:30:43.316120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.008 [2024-11-20 09:30:43.334585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:18.008 09:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.008 09:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:18.008 [2024-11-20 09:30:43.336970] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.944 
09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.944 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.944 "name": "raid_bdev1", 00:18:18.944 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:18.944 "strip_size_kb": 0, 00:18:18.944 "state": "online", 00:18:18.944 "raid_level": "raid1", 00:18:18.944 "superblock": true, 00:18:18.944 "num_base_bdevs": 2, 00:18:18.944 "num_base_bdevs_discovered": 2, 00:18:18.944 "num_base_bdevs_operational": 2, 00:18:18.944 "process": { 00:18:18.944 "type": "rebuild", 00:18:18.944 "target": "spare", 00:18:18.944 "progress": { 00:18:18.944 "blocks": 2560, 00:18:18.944 "percent": 32 00:18:18.944 } 00:18:18.944 }, 00:18:18.944 "base_bdevs_list": [ 00:18:18.944 { 00:18:18.944 "name": "spare", 00:18:18.944 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:18.944 "is_configured": true, 00:18:18.944 "data_offset": 256, 00:18:18.944 "data_size": 7936 00:18:18.944 }, 00:18:18.944 { 00:18:18.944 "name": "BaseBdev2", 00:18:18.944 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:18.944 "is_configured": true, 00:18:18.944 "data_offset": 256, 00:18:18.944 "data_size": 7936 00:18:18.944 } 00:18:18.944 ] 00:18:18.944 }' 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.204 09:30:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.204 [2024-11-20 09:30:44.512400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.204 [2024-11-20 09:30:44.543014] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.204 [2024-11-20 09:30:44.543089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.204 [2024-11-20 09:30:44.543106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.204 [2024-11-20 09:30:44.543116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.204 09:30:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.204 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.205 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.205 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.205 "name": "raid_bdev1", 00:18:19.205 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:19.205 "strip_size_kb": 0, 00:18:19.205 "state": "online", 00:18:19.205 "raid_level": "raid1", 00:18:19.205 "superblock": true, 00:18:19.205 "num_base_bdevs": 2, 00:18:19.205 "num_base_bdevs_discovered": 1, 00:18:19.205 "num_base_bdevs_operational": 1, 00:18:19.205 "base_bdevs_list": [ 00:18:19.205 { 00:18:19.205 "name": null, 00:18:19.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.205 "is_configured": false, 00:18:19.205 "data_offset": 0, 00:18:19.205 "data_size": 7936 00:18:19.205 }, 00:18:19.205 { 00:18:19.205 "name": "BaseBdev2", 00:18:19.205 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:19.205 "is_configured": true, 00:18:19.205 "data_offset": 256, 00:18:19.205 
"data_size": 7936 00:18:19.205 } 00:18:19.205 ] 00:18:19.205 }' 00:18:19.205 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.205 09:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 09:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.773 09:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.773 09:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 [2024-11-20 09:30:45.047455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.773 [2024-11-20 09:30:45.047645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.773 [2024-11-20 09:30:45.047693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:19.773 [2024-11-20 09:30:45.047750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.773 [2024-11-20 09:30:45.048327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.773 [2024-11-20 09:30:45.048402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.773 [2024-11-20 09:30:45.048567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.773 [2024-11-20 09:30:45.048626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.773 [2024-11-20 09:30:45.048678] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:19.773 [2024-11-20 09:30:45.048725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.773 [2024-11-20 09:30:45.067853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:19.773 spare 00:18:19.773 09:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.773 [2024-11-20 09:30:45.069877] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.773 09:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.729 "name": "raid_bdev1", 00:18:20.729 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:20.729 "strip_size_kb": 0, 00:18:20.729 
"state": "online", 00:18:20.729 "raid_level": "raid1", 00:18:20.729 "superblock": true, 00:18:20.729 "num_base_bdevs": 2, 00:18:20.729 "num_base_bdevs_discovered": 2, 00:18:20.729 "num_base_bdevs_operational": 2, 00:18:20.729 "process": { 00:18:20.729 "type": "rebuild", 00:18:20.729 "target": "spare", 00:18:20.729 "progress": { 00:18:20.729 "blocks": 2560, 00:18:20.729 "percent": 32 00:18:20.729 } 00:18:20.729 }, 00:18:20.729 "base_bdevs_list": [ 00:18:20.729 { 00:18:20.729 "name": "spare", 00:18:20.729 "uuid": "572db8ab-402f-5cd1-b48c-f4a1b9f4f8be", 00:18:20.729 "is_configured": true, 00:18:20.729 "data_offset": 256, 00:18:20.729 "data_size": 7936 00:18:20.729 }, 00:18:20.729 { 00:18:20.729 "name": "BaseBdev2", 00:18:20.729 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:20.729 "is_configured": true, 00:18:20.729 "data_offset": 256, 00:18:20.729 "data_size": 7936 00:18:20.729 } 00:18:20.729 ] 00:18:20.729 }' 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.729 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.990 [2024-11-20 09:30:46.213648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.990 [2024-11-20 09:30:46.275820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:20.990 [2024-11-20 09:30:46.276033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.990 [2024-11-20 09:30:46.276059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.990 [2024-11-20 09:30:46.276069] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.990 09:30:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.990 "name": "raid_bdev1", 00:18:20.990 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:20.990 "strip_size_kb": 0, 00:18:20.990 "state": "online", 00:18:20.990 "raid_level": "raid1", 00:18:20.990 "superblock": true, 00:18:20.990 "num_base_bdevs": 2, 00:18:20.990 "num_base_bdevs_discovered": 1, 00:18:20.990 "num_base_bdevs_operational": 1, 00:18:20.990 "base_bdevs_list": [ 00:18:20.990 { 00:18:20.990 "name": null, 00:18:20.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.990 "is_configured": false, 00:18:20.990 "data_offset": 0, 00:18:20.990 "data_size": 7936 00:18:20.990 }, 00:18:20.990 { 00:18:20.990 "name": "BaseBdev2", 00:18:20.990 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:20.990 "is_configured": true, 00:18:20.990 "data_offset": 256, 00:18:20.990 "data_size": 7936 00:18:20.990 } 00:18:20.990 ] 00:18:20.990 }' 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.990 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.557 "name": "raid_bdev1", 00:18:21.557 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:21.557 "strip_size_kb": 0, 00:18:21.557 "state": "online", 00:18:21.557 "raid_level": "raid1", 00:18:21.557 "superblock": true, 00:18:21.557 "num_base_bdevs": 2, 00:18:21.557 "num_base_bdevs_discovered": 1, 00:18:21.557 "num_base_bdevs_operational": 1, 00:18:21.557 "base_bdevs_list": [ 00:18:21.557 { 00:18:21.557 "name": null, 00:18:21.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.557 "is_configured": false, 00:18:21.557 "data_offset": 0, 00:18:21.557 "data_size": 7936 00:18:21.557 }, 00:18:21.557 { 00:18:21.557 "name": "BaseBdev2", 00:18:21.557 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:21.557 "is_configured": true, 00:18:21.557 "data_offset": 256, 00:18:21.557 "data_size": 7936 00:18:21.557 } 00:18:21.557 ] 00:18:21.557 }' 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.557 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.557 [2024-11-20 09:30:46.984628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:21.557 [2024-11-20 09:30:46.984711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.557 [2024-11-20 09:30:46.984737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:21.557 [2024-11-20 09:30:46.984758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.557 [2024-11-20 09:30:46.985405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.557 [2024-11-20 09:30:46.985447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:21.558 [2024-11-20 09:30:46.985584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:21.558 [2024-11-20 09:30:46.985606] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.558 [2024-11-20 09:30:46.985624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:21.558 [2024-11-20 09:30:46.985639] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:21.558 BaseBdev1 00:18:21.558 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.558 09:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.934 09:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.934 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.934 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.934 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.934 "name": "raid_bdev1", 00:18:22.934 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:22.934 "strip_size_kb": 0, 00:18:22.934 "state": "online", 00:18:22.934 "raid_level": "raid1", 00:18:22.934 "superblock": true, 00:18:22.934 "num_base_bdevs": 2, 00:18:22.934 "num_base_bdevs_discovered": 1, 00:18:22.934 "num_base_bdevs_operational": 1, 00:18:22.934 "base_bdevs_list": [ 00:18:22.934 { 00:18:22.934 "name": null, 00:18:22.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.934 "is_configured": false, 00:18:22.934 "data_offset": 0, 00:18:22.934 "data_size": 7936 00:18:22.934 }, 00:18:22.934 { 00:18:22.934 "name": "BaseBdev2", 00:18:22.934 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:22.934 "is_configured": true, 00:18:22.934 "data_offset": 256, 00:18:22.934 "data_size": 7936 00:18:22.934 } 00:18:22.934 ] 00:18:22.934 }' 00:18:22.934 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.934 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.193 "name": "raid_bdev1", 00:18:23.193 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:23.193 "strip_size_kb": 0, 00:18:23.193 "state": "online", 00:18:23.193 "raid_level": "raid1", 00:18:23.193 "superblock": true, 00:18:23.193 "num_base_bdevs": 2, 00:18:23.193 "num_base_bdevs_discovered": 1, 00:18:23.193 "num_base_bdevs_operational": 1, 00:18:23.193 "base_bdevs_list": [ 00:18:23.193 { 00:18:23.193 "name": null, 00:18:23.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.193 "is_configured": false, 00:18:23.193 "data_offset": 0, 00:18:23.193 "data_size": 7936 00:18:23.193 }, 00:18:23.193 { 00:18:23.193 "name": "BaseBdev2", 00:18:23.193 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:23.193 "is_configured": true, 00:18:23.193 "data_offset": 256, 00:18:23.193 "data_size": 7936 00:18:23.193 } 00:18:23.193 ] 00:18:23.193 }' 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.193 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.193 [2024-11-20 09:30:48.642244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.193 [2024-11-20 09:30:48.642572] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.193 [2024-11-20 09:30:48.642647] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:23.452 request: 00:18:23.452 { 00:18:23.452 "base_bdev": "BaseBdev1", 00:18:23.452 "raid_bdev": "raid_bdev1", 00:18:23.452 "method": "bdev_raid_add_base_bdev", 00:18:23.452 "req_id": 1 00:18:23.452 } 00:18:23.452 Got JSON-RPC error response 00:18:23.452 response: 00:18:23.452 { 00:18:23.452 "code": -22, 00:18:23.453 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:23.453 } 00:18:23.453 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:23.453 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:23.453 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.453 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.453 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.453 09:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.387 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.387 "name": "raid_bdev1", 00:18:24.387 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:24.387 "strip_size_kb": 0, 00:18:24.387 "state": "online", 00:18:24.387 "raid_level": "raid1", 00:18:24.387 "superblock": true, 00:18:24.387 "num_base_bdevs": 2, 00:18:24.387 "num_base_bdevs_discovered": 1, 00:18:24.387 "num_base_bdevs_operational": 1, 00:18:24.387 "base_bdevs_list": [ 00:18:24.387 { 00:18:24.387 "name": null, 00:18:24.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.387 "is_configured": false, 00:18:24.387 "data_offset": 0, 00:18:24.387 "data_size": 7936 00:18:24.387 }, 00:18:24.387 { 00:18:24.387 "name": "BaseBdev2", 00:18:24.387 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:24.387 "is_configured": true, 00:18:24.387 "data_offset": 256, 00:18:24.387 "data_size": 7936 00:18:24.387 } 00:18:24.387 ] 00:18:24.388 }' 00:18:24.388 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.388 09:30:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.955 09:30:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.955 "name": "raid_bdev1", 00:18:24.955 "uuid": "5291e292-91df-4910-9898-45f7451259b1", 00:18:24.955 "strip_size_kb": 0, 00:18:24.955 "state": "online", 00:18:24.955 "raid_level": "raid1", 00:18:24.955 "superblock": true, 00:18:24.955 "num_base_bdevs": 2, 00:18:24.955 "num_base_bdevs_discovered": 1, 00:18:24.955 "num_base_bdevs_operational": 1, 00:18:24.955 "base_bdevs_list": [ 00:18:24.955 { 00:18:24.955 "name": null, 00:18:24.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.955 "is_configured": false, 00:18:24.955 "data_offset": 0, 00:18:24.955 "data_size": 7936 00:18:24.955 }, 00:18:24.955 { 00:18:24.955 "name": "BaseBdev2", 00:18:24.955 "uuid": "07d45ad7-a0eb-5cb1-82b0-22e0d39cffb4", 00:18:24.955 "is_configured": true, 00:18:24.955 "data_offset": 256, 00:18:24.955 "data_size": 7936 00:18:24.955 } 00:18:24.955 ] 00:18:24.955 }' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.955 09:30:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86948 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86948 ']' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86948 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86948 00:18:24.955 killing process with pid 86948 00:18:24.955 Received shutdown signal, test time was about 60.000000 seconds 00:18:24.955 00:18:24.955 Latency(us) 00:18:24.955 [2024-11-20T09:30:50.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.955 [2024-11-20T09:30:50.411Z] =================================================================================================================== 00:18:24.955 [2024-11-20T09:30:50.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86948' 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86948 00:18:24.955 [2024-11-20 09:30:50.319222] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.955 [2024-11-20 09:30:50.319401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.955 09:30:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86948 00:18:24.955 [2024-11-20 
09:30:50.319481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.955 [2024-11-20 09:30:50.319495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:25.522 [2024-11-20 09:30:50.679730] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.901 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:26.901 00:18:26.901 real 0m20.922s 00:18:26.901 user 0m27.426s 00:18:26.901 sys 0m2.848s 00:18:26.901 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.901 ************************************ 00:18:26.901 END TEST raid_rebuild_test_sb_4k 00:18:26.901 ************************************ 00:18:26.901 09:30:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.901 09:30:52 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:26.901 09:30:52 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:26.901 09:30:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:26.901 09:30:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.901 09:30:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.901 ************************************ 00:18:26.901 START TEST raid_state_function_test_sb_md_separate 00:18:26.902 ************************************ 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:26.902 
09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:26.902 09:30:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:26.902 Process raid pid: 87649 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87649 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87649' 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87649 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87649 ']' 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.902 09:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.902 [2024-11-20 09:30:52.214670] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:18:26.902 [2024-11-20 09:30:52.214999] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.161 [2024-11-20 09:30:52.403265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.161 [2024-11-20 09:30:52.539789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.420 [2024-11-20 09:30:52.781166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.420 [2024-11-20 09:30:52.781324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.679 [2024-11-20 09:30:53.075463] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.679 [2024-11-20 09:30:53.075623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:27.679 [2024-11-20 09:30:53.075666] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.679 [2024-11-20 09:30:53.075693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.679 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.937 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.937 "name": "Existed_Raid", 00:18:27.937 "uuid": "c3e653fa-df91-4026-bad1-c30685337162", 00:18:27.937 "strip_size_kb": 0, 00:18:27.937 "state": "configuring", 00:18:27.937 "raid_level": "raid1", 00:18:27.937 "superblock": true, 00:18:27.937 "num_base_bdevs": 2, 00:18:27.937 "num_base_bdevs_discovered": 0, 00:18:27.937 "num_base_bdevs_operational": 2, 00:18:27.937 "base_bdevs_list": [ 00:18:27.937 { 00:18:27.937 "name": "BaseBdev1", 00:18:27.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.937 "is_configured": false, 00:18:27.937 "data_offset": 0, 00:18:27.937 "data_size": 0 00:18:27.937 }, 00:18:27.937 { 00:18:27.937 "name": "BaseBdev2", 00:18:27.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.937 "is_configured": false, 00:18:27.937 "data_offset": 0, 00:18:27.937 "data_size": 0 00:18:27.937 } 00:18:27.937 ] 00:18:27.937 }' 00:18:27.937 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.937 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.196 
[2024-11-20 09:30:53.574916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.196 [2024-11-20 09:30:53.575086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.196 [2024-11-20 09:30:53.586684] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.196 [2024-11-20 09:30:53.586808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.196 [2024-11-20 09:30:53.586842] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.196 [2024-11-20 09:30:53.586874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:28.196 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.197 [2024-11-20 09:30:53.640458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.197 
BaseBdev1 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.197 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.457 [ 00:18:28.457 { 00:18:28.457 "name": "BaseBdev1", 00:18:28.457 "aliases": [ 00:18:28.457 "051e6d45-b345-4e49-97bb-7196ffa19201" 00:18:28.457 ], 00:18:28.457 "product_name": "Malloc disk", 
00:18:28.457 "block_size": 4096, 00:18:28.457 "num_blocks": 8192, 00:18:28.457 "uuid": "051e6d45-b345-4e49-97bb-7196ffa19201", 00:18:28.457 "md_size": 32, 00:18:28.457 "md_interleave": false, 00:18:28.457 "dif_type": 0, 00:18:28.457 "assigned_rate_limits": { 00:18:28.457 "rw_ios_per_sec": 0, 00:18:28.457 "rw_mbytes_per_sec": 0, 00:18:28.457 "r_mbytes_per_sec": 0, 00:18:28.457 "w_mbytes_per_sec": 0 00:18:28.457 }, 00:18:28.457 "claimed": true, 00:18:28.457 "claim_type": "exclusive_write", 00:18:28.457 "zoned": false, 00:18:28.457 "supported_io_types": { 00:18:28.457 "read": true, 00:18:28.457 "write": true, 00:18:28.457 "unmap": true, 00:18:28.457 "flush": true, 00:18:28.457 "reset": true, 00:18:28.457 "nvme_admin": false, 00:18:28.457 "nvme_io": false, 00:18:28.457 "nvme_io_md": false, 00:18:28.457 "write_zeroes": true, 00:18:28.457 "zcopy": true, 00:18:28.457 "get_zone_info": false, 00:18:28.457 "zone_management": false, 00:18:28.457 "zone_append": false, 00:18:28.457 "compare": false, 00:18:28.457 "compare_and_write": false, 00:18:28.457 "abort": true, 00:18:28.457 "seek_hole": false, 00:18:28.457 "seek_data": false, 00:18:28.457 "copy": true, 00:18:28.457 "nvme_iov_md": false 00:18:28.457 }, 00:18:28.457 "memory_domains": [ 00:18:28.457 { 00:18:28.457 "dma_device_id": "system", 00:18:28.457 "dma_device_type": 1 00:18:28.457 }, 00:18:28.457 { 00:18:28.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.457 "dma_device_type": 2 00:18:28.457 } 00:18:28.457 ], 00:18:28.457 "driver_specific": {} 00:18:28.457 } 00:18:28.457 ] 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:28.457 09:30:53 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.457 "name": "Existed_Raid", 00:18:28.457 "uuid": "56867a33-b73d-40f4-ab24-4e413e91b5dd", 
00:18:28.457 "strip_size_kb": 0, 00:18:28.457 "state": "configuring", 00:18:28.457 "raid_level": "raid1", 00:18:28.457 "superblock": true, 00:18:28.457 "num_base_bdevs": 2, 00:18:28.457 "num_base_bdevs_discovered": 1, 00:18:28.457 "num_base_bdevs_operational": 2, 00:18:28.457 "base_bdevs_list": [ 00:18:28.457 { 00:18:28.457 "name": "BaseBdev1", 00:18:28.457 "uuid": "051e6d45-b345-4e49-97bb-7196ffa19201", 00:18:28.457 "is_configured": true, 00:18:28.457 "data_offset": 256, 00:18:28.457 "data_size": 7936 00:18:28.457 }, 00:18:28.457 { 00:18:28.457 "name": "BaseBdev2", 00:18:28.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.457 "is_configured": false, 00:18:28.457 "data_offset": 0, 00:18:28.457 "data_size": 0 00:18:28.457 } 00:18:28.457 ] 00:18:28.457 }' 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.457 09:30:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.717 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:28.717 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.006 [2024-11-20 09:30:54.175668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:29.006 [2024-11-20 09:30:54.175749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:29.006 09:30:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.006 [2024-11-20 09:30:54.187704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.006 [2024-11-20 09:30:54.189926] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:29.006 [2024-11-20 09:30:54.190021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.006 "name": "Existed_Raid", 00:18:29.006 "uuid": "ebf6898f-c2f0-44c1-a4d1-4fcefa6aeb27", 00:18:29.006 "strip_size_kb": 0, 00:18:29.006 "state": "configuring", 00:18:29.006 "raid_level": "raid1", 00:18:29.006 "superblock": true, 00:18:29.006 "num_base_bdevs": 2, 00:18:29.006 "num_base_bdevs_discovered": 1, 00:18:29.006 "num_base_bdevs_operational": 2, 00:18:29.006 "base_bdevs_list": [ 00:18:29.006 { 00:18:29.006 "name": "BaseBdev1", 00:18:29.006 "uuid": "051e6d45-b345-4e49-97bb-7196ffa19201", 00:18:29.006 "is_configured": true, 00:18:29.006 "data_offset": 256, 00:18:29.006 "data_size": 7936 00:18:29.006 }, 00:18:29.006 { 00:18:29.006 "name": "BaseBdev2", 00:18:29.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.006 "is_configured": false, 00:18:29.006 "data_offset": 0, 00:18:29.006 "data_size": 0 00:18:29.006 } 00:18:29.006 ] 00:18:29.006 }' 00:18:29.006 09:30:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.006 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.288 [2024-11-20 09:30:54.684865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.288 [2024-11-20 09:30:54.685144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:29.288 [2024-11-20 09:30:54.685161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:29.288 [2024-11-20 09:30:54.685256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:29.288 [2024-11-20 09:30:54.685386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:29.288 [2024-11-20 09:30:54.685398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:29.288 [2024-11-20 09:30:54.685531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.288 BaseBdev2 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.288 [ 00:18:29.288 { 00:18:29.288 "name": "BaseBdev2", 00:18:29.288 "aliases": [ 00:18:29.288 "d5a6c1fd-c934-46d5-8438-6aa27e1ebdb4" 00:18:29.288 ], 00:18:29.288 "product_name": "Malloc disk", 00:18:29.288 "block_size": 4096, 00:18:29.288 "num_blocks": 8192, 00:18:29.288 "uuid": "d5a6c1fd-c934-46d5-8438-6aa27e1ebdb4", 00:18:29.288 "md_size": 32, 00:18:29.288 "md_interleave": false, 00:18:29.288 "dif_type": 0, 00:18:29.288 "assigned_rate_limits": { 00:18:29.288 "rw_ios_per_sec": 0, 00:18:29.288 "rw_mbytes_per_sec": 0, 00:18:29.288 "r_mbytes_per_sec": 0, 00:18:29.288 "w_mbytes_per_sec": 0 00:18:29.288 }, 00:18:29.288 "claimed": true, 00:18:29.288 "claim_type": 
"exclusive_write", 00:18:29.288 "zoned": false, 00:18:29.288 "supported_io_types": { 00:18:29.288 "read": true, 00:18:29.288 "write": true, 00:18:29.288 "unmap": true, 00:18:29.288 "flush": true, 00:18:29.288 "reset": true, 00:18:29.288 "nvme_admin": false, 00:18:29.288 "nvme_io": false, 00:18:29.288 "nvme_io_md": false, 00:18:29.288 "write_zeroes": true, 00:18:29.288 "zcopy": true, 00:18:29.288 "get_zone_info": false, 00:18:29.288 "zone_management": false, 00:18:29.288 "zone_append": false, 00:18:29.288 "compare": false, 00:18:29.288 "compare_and_write": false, 00:18:29.288 "abort": true, 00:18:29.288 "seek_hole": false, 00:18:29.288 "seek_data": false, 00:18:29.288 "copy": true, 00:18:29.288 "nvme_iov_md": false 00:18:29.288 }, 00:18:29.288 "memory_domains": [ 00:18:29.288 { 00:18:29.288 "dma_device_id": "system", 00:18:29.288 "dma_device_type": 1 00:18:29.288 }, 00:18:29.288 { 00:18:29.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.288 "dma_device_type": 2 00:18:29.288 } 00:18:29.288 ], 00:18:29.288 "driver_specific": {} 00:18:29.288 } 00:18:29.288 ] 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.288 
09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.288 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.547 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.547 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.547 "name": "Existed_Raid", 00:18:29.547 "uuid": "ebf6898f-c2f0-44c1-a4d1-4fcefa6aeb27", 00:18:29.547 "strip_size_kb": 0, 00:18:29.547 "state": "online", 00:18:29.547 "raid_level": "raid1", 00:18:29.547 "superblock": true, 00:18:29.547 "num_base_bdevs": 2, 00:18:29.547 "num_base_bdevs_discovered": 2, 00:18:29.547 "num_base_bdevs_operational": 2, 00:18:29.547 
"base_bdevs_list": [ 00:18:29.547 { 00:18:29.547 "name": "BaseBdev1", 00:18:29.547 "uuid": "051e6d45-b345-4e49-97bb-7196ffa19201", 00:18:29.547 "is_configured": true, 00:18:29.547 "data_offset": 256, 00:18:29.547 "data_size": 7936 00:18:29.547 }, 00:18:29.547 { 00:18:29.547 "name": "BaseBdev2", 00:18:29.547 "uuid": "d5a6c1fd-c934-46d5-8438-6aa27e1ebdb4", 00:18:29.547 "is_configured": true, 00:18:29.547 "data_offset": 256, 00:18:29.547 "data_size": 7936 00:18:29.547 } 00:18:29.547 ] 00:18:29.547 }' 00:18:29.547 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.547 09:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:29.808 [2024-11-20 09:30:55.208505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.808 "name": "Existed_Raid", 00:18:29.808 "aliases": [ 00:18:29.808 "ebf6898f-c2f0-44c1-a4d1-4fcefa6aeb27" 00:18:29.808 ], 00:18:29.808 "product_name": "Raid Volume", 00:18:29.808 "block_size": 4096, 00:18:29.808 "num_blocks": 7936, 00:18:29.808 "uuid": "ebf6898f-c2f0-44c1-a4d1-4fcefa6aeb27", 00:18:29.808 "md_size": 32, 00:18:29.808 "md_interleave": false, 00:18:29.808 "dif_type": 0, 00:18:29.808 "assigned_rate_limits": { 00:18:29.808 "rw_ios_per_sec": 0, 00:18:29.808 "rw_mbytes_per_sec": 0, 00:18:29.808 "r_mbytes_per_sec": 0, 00:18:29.808 "w_mbytes_per_sec": 0 00:18:29.808 }, 00:18:29.808 "claimed": false, 00:18:29.808 "zoned": false, 00:18:29.808 "supported_io_types": { 00:18:29.808 "read": true, 00:18:29.808 "write": true, 00:18:29.808 "unmap": false, 00:18:29.808 "flush": false, 00:18:29.808 "reset": true, 00:18:29.808 "nvme_admin": false, 00:18:29.808 "nvme_io": false, 00:18:29.808 "nvme_io_md": false, 00:18:29.808 "write_zeroes": true, 00:18:29.808 "zcopy": false, 00:18:29.808 "get_zone_info": false, 00:18:29.808 "zone_management": false, 00:18:29.808 "zone_append": false, 00:18:29.808 "compare": false, 00:18:29.808 "compare_and_write": false, 00:18:29.808 "abort": false, 00:18:29.808 "seek_hole": false, 00:18:29.808 "seek_data": false, 00:18:29.808 "copy": false, 00:18:29.808 "nvme_iov_md": false 00:18:29.808 }, 00:18:29.808 "memory_domains": [ 00:18:29.808 { 00:18:29.808 "dma_device_id": "system", 00:18:29.808 "dma_device_type": 1 00:18:29.808 }, 00:18:29.808 { 00:18:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.808 "dma_device_type": 2 00:18:29.808 }, 00:18:29.808 { 
00:18:29.808 "dma_device_id": "system", 00:18:29.808 "dma_device_type": 1 00:18:29.808 }, 00:18:29.808 { 00:18:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.808 "dma_device_type": 2 00:18:29.808 } 00:18:29.808 ], 00:18:29.808 "driver_specific": { 00:18:29.808 "raid": { 00:18:29.808 "uuid": "ebf6898f-c2f0-44c1-a4d1-4fcefa6aeb27", 00:18:29.808 "strip_size_kb": 0, 00:18:29.808 "state": "online", 00:18:29.808 "raid_level": "raid1", 00:18:29.808 "superblock": true, 00:18:29.808 "num_base_bdevs": 2, 00:18:29.808 "num_base_bdevs_discovered": 2, 00:18:29.808 "num_base_bdevs_operational": 2, 00:18:29.808 "base_bdevs_list": [ 00:18:29.808 { 00:18:29.808 "name": "BaseBdev1", 00:18:29.808 "uuid": "051e6d45-b345-4e49-97bb-7196ffa19201", 00:18:29.808 "is_configured": true, 00:18:29.808 "data_offset": 256, 00:18:29.808 "data_size": 7936 00:18:29.808 }, 00:18:29.808 { 00:18:29.808 "name": "BaseBdev2", 00:18:29.808 "uuid": "d5a6c1fd-c934-46d5-8438-6aa27e1ebdb4", 00:18:29.808 "is_configured": true, 00:18:29.808 "data_offset": 256, 00:18:29.808 "data_size": 7936 00:18:29.808 } 00:18:29.808 ] 00:18:29.808 } 00:18:29.808 } 00:18:29.808 }' 00:18:29.808 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:30.067 BaseBdev2' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.067 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.067 [2024-11-20 09:30:55.455740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.326 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.326 "name": "Existed_Raid", 00:18:30.326 "uuid": "ebf6898f-c2f0-44c1-a4d1-4fcefa6aeb27", 00:18:30.326 "strip_size_kb": 0, 00:18:30.326 "state": "online", 00:18:30.326 "raid_level": "raid1", 00:18:30.327 "superblock": true, 00:18:30.327 "num_base_bdevs": 2, 00:18:30.327 "num_base_bdevs_discovered": 1, 00:18:30.327 "num_base_bdevs_operational": 1, 00:18:30.327 "base_bdevs_list": [ 00:18:30.327 { 00:18:30.327 "name": null, 00:18:30.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.327 "is_configured": false, 00:18:30.327 "data_offset": 0, 00:18:30.327 "data_size": 7936 00:18:30.327 }, 00:18:30.327 { 00:18:30.327 "name": "BaseBdev2", 00:18:30.327 "uuid": 
"d5a6c1fd-c934-46d5-8438-6aa27e1ebdb4", 00:18:30.327 "is_configured": true, 00:18:30.327 "data_offset": 256, 00:18:30.327 "data_size": 7936 00:18:30.327 } 00:18:30.327 ] 00:18:30.327 }' 00:18:30.327 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.327 09:30:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.585 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:30.586 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.845 [2024-11-20 09:30:56.091352] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:30.845 [2024-11-20 09:30:56.091590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.845 [2024-11-20 09:30:56.196706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.845 [2024-11-20 09:30:56.196760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.845 [2024-11-20 09:30:56.196772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:30.845 09:30:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87649 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87649 ']' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87649 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87649 00:18:30.845 killing process with pid 87649 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87649' 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87649 00:18:30.845 [2024-11-20 09:30:56.284283] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.845 09:30:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87649 00:18:31.104 [2024-11-20 09:30:56.301516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.042 09:30:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:32.042 00:18:32.042 real 0m5.397s 00:18:32.042 user 0m7.774s 00:18:32.042 sys 0m0.977s 00:18:32.042 09:30:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.042 
09:30:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.042 ************************************ 00:18:32.042 END TEST raid_state_function_test_sb_md_separate 00:18:32.042 ************************************ 00:18:32.301 09:30:57 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:32.301 09:30:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:32.301 09:30:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.301 09:30:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.301 ************************************ 00:18:32.301 START TEST raid_superblock_test_md_separate 00:18:32.301 ************************************ 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87903 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87903 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87903 ']' 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.301 09:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.301 [2024-11-20 09:30:57.625769] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:18:32.302 [2024-11-20 09:30:57.626033] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87903 ] 00:18:32.561 [2024-11-20 09:30:57.793474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.561 [2024-11-20 09:30:57.906740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.823 [2024-11-20 09:30:58.117210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.823 [2024-11-20 09:30:58.117326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:33.086 09:30:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.086 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 malloc1 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 [2024-11-20 09:30:58.578598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.346 [2024-11-20 09:30:58.578699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.346 [2024-11-20 09:30:58.578744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.346 [2024-11-20 09:30:58.578778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.346 [2024-11-20 09:30:58.580748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.346 [2024-11-20 09:30:58.580820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:33.346 pt1 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 malloc2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.346 09:30:58 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 [2024-11-20 09:30:58.635515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.346 [2024-11-20 09:30:58.635615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.346 [2024-11-20 09:30:58.635654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:33.346 [2024-11-20 09:30:58.635682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.346 [2024-11-20 09:30:58.637509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.346 [2024-11-20 09:30:58.637576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.346 pt2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 [2024-11-20 09:30:58.647521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.346 [2024-11-20 09:30:58.649264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.346 [2024-11-20 09:30:58.649484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:33.346 [2024-11-20 09:30:58.649532] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.346 [2024-11-20 09:30:58.649628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:33.346 [2024-11-20 09:30:58.649784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:33.346 [2024-11-20 09:30:58.649824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:33.346 [2024-11-20 09:30:58.649964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.346 09:30:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.346 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.346 "name": "raid_bdev1", 00:18:33.346 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:33.346 "strip_size_kb": 0, 00:18:33.346 "state": "online", 00:18:33.346 "raid_level": "raid1", 00:18:33.346 "superblock": true, 00:18:33.346 "num_base_bdevs": 2, 00:18:33.346 "num_base_bdevs_discovered": 2, 00:18:33.346 "num_base_bdevs_operational": 2, 00:18:33.346 "base_bdevs_list": [ 00:18:33.346 { 00:18:33.346 "name": "pt1", 00:18:33.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.346 "is_configured": true, 00:18:33.346 "data_offset": 256, 00:18:33.346 "data_size": 7936 00:18:33.346 }, 00:18:33.346 { 00:18:33.346 "name": "pt2", 00:18:33.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.346 "is_configured": true, 00:18:33.346 "data_offset": 256, 00:18:33.347 "data_size": 7936 00:18:33.347 } 00:18:33.347 ] 00:18:33.347 }' 00:18:33.347 09:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.347 09:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:33.916 [2024-11-20 09:30:59.091034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.916 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:33.916 "name": "raid_bdev1", 00:18:33.916 "aliases": [ 00:18:33.916 "5da824f3-8edf-4179-b8d6-4720e1047779" 00:18:33.916 ], 00:18:33.916 "product_name": "Raid Volume", 00:18:33.916 "block_size": 4096, 00:18:33.916 "num_blocks": 7936, 00:18:33.916 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:33.916 "md_size": 32, 00:18:33.916 "md_interleave": false, 00:18:33.916 "dif_type": 0, 00:18:33.916 "assigned_rate_limits": { 00:18:33.916 "rw_ios_per_sec": 0, 00:18:33.916 "rw_mbytes_per_sec": 0, 00:18:33.916 "r_mbytes_per_sec": 0, 00:18:33.916 "w_mbytes_per_sec": 0 00:18:33.916 }, 00:18:33.916 "claimed": false, 00:18:33.916 "zoned": false, 
00:18:33.916 "supported_io_types": { 00:18:33.916 "read": true, 00:18:33.916 "write": true, 00:18:33.916 "unmap": false, 00:18:33.916 "flush": false, 00:18:33.916 "reset": true, 00:18:33.916 "nvme_admin": false, 00:18:33.916 "nvme_io": false, 00:18:33.917 "nvme_io_md": false, 00:18:33.917 "write_zeroes": true, 00:18:33.917 "zcopy": false, 00:18:33.917 "get_zone_info": false, 00:18:33.917 "zone_management": false, 00:18:33.917 "zone_append": false, 00:18:33.917 "compare": false, 00:18:33.917 "compare_and_write": false, 00:18:33.917 "abort": false, 00:18:33.917 "seek_hole": false, 00:18:33.917 "seek_data": false, 00:18:33.917 "copy": false, 00:18:33.917 "nvme_iov_md": false 00:18:33.917 }, 00:18:33.917 "memory_domains": [ 00:18:33.917 { 00:18:33.917 "dma_device_id": "system", 00:18:33.917 "dma_device_type": 1 00:18:33.917 }, 00:18:33.917 { 00:18:33.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.917 "dma_device_type": 2 00:18:33.917 }, 00:18:33.917 { 00:18:33.917 "dma_device_id": "system", 00:18:33.917 "dma_device_type": 1 00:18:33.917 }, 00:18:33.917 { 00:18:33.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.917 "dma_device_type": 2 00:18:33.917 } 00:18:33.917 ], 00:18:33.917 "driver_specific": { 00:18:33.917 "raid": { 00:18:33.917 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:33.917 "strip_size_kb": 0, 00:18:33.917 "state": "online", 00:18:33.917 "raid_level": "raid1", 00:18:33.917 "superblock": true, 00:18:33.917 "num_base_bdevs": 2, 00:18:33.917 "num_base_bdevs_discovered": 2, 00:18:33.917 "num_base_bdevs_operational": 2, 00:18:33.917 "base_bdevs_list": [ 00:18:33.917 { 00:18:33.917 "name": "pt1", 00:18:33.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.917 "is_configured": true, 00:18:33.917 "data_offset": 256, 00:18:33.917 "data_size": 7936 00:18:33.917 }, 00:18:33.917 { 00:18:33.917 "name": "pt2", 00:18:33.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.917 "is_configured": true, 00:18:33.917 "data_offset": 256, 
00:18:33.917 "data_size": 7936 00:18:33.917 } 00:18:33.917 ] 00:18:33.917 } 00:18:33.917 } 00:18:33.917 }' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:33.917 pt2' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.917 [2024-11-20 09:30:59.314619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5da824f3-8edf-4179-b8d6-4720e1047779 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 5da824f3-8edf-4179-b8d6-4720e1047779 ']' 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.917 [2024-11-20 09:30:59.362265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.917 [2024-11-20 09:30:59.362352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.917 [2024-11-20 09:30:59.362475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.917 [2024-11-20 09:30:59.362534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.917 [2024-11-20 09:30:59.362546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:33.917 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:34.181 09:30:59 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.181 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.181 [2024-11-20 09:30:59.482091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:34.181 [2024-11-20 09:30:59.484073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:34.181 [2024-11-20 09:30:59.484153] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:34.181 [2024-11-20 09:30:59.484218] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:34.181 [2024-11-20 09:30:59.484235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.181 [2024-11-20 09:30:59.484247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:34.181 request: 00:18:34.181 { 00:18:34.181 "name": 
"raid_bdev1", 00:18:34.181 "raid_level": "raid1", 00:18:34.181 "base_bdevs": [ 00:18:34.181 "malloc1", 00:18:34.181 "malloc2" 00:18:34.181 ], 00:18:34.181 "superblock": false, 00:18:34.181 "method": "bdev_raid_create", 00:18:34.181 "req_id": 1 00:18:34.181 } 00:18:34.181 Got JSON-RPC error response 00:18:34.182 response: 00:18:34.182 { 00:18:34.182 "code": -17, 00:18:34.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:34.182 } 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.182 [2024-11-20 09:30:59.541954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.182 [2024-11-20 09:30:59.542078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.182 [2024-11-20 09:30:59.542114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:34.182 [2024-11-20 09:30:59.542178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.182 [2024-11-20 09:30:59.544153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.182 [2024-11-20 09:30:59.544229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.182 [2024-11-20 09:30:59.544303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:34.182 [2024-11-20 09:30:59.544383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.182 pt1 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.182 "name": "raid_bdev1", 00:18:34.182 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:34.182 "strip_size_kb": 0, 00:18:34.182 "state": "configuring", 00:18:34.182 "raid_level": "raid1", 00:18:34.182 "superblock": true, 00:18:34.182 "num_base_bdevs": 2, 00:18:34.182 "num_base_bdevs_discovered": 1, 00:18:34.182 "num_base_bdevs_operational": 2, 00:18:34.182 "base_bdevs_list": [ 00:18:34.182 { 00:18:34.182 "name": "pt1", 00:18:34.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.182 "is_configured": true, 00:18:34.182 "data_offset": 256, 00:18:34.182 "data_size": 7936 00:18:34.182 }, 00:18:34.182 { 00:18:34.182 "name": null, 00:18:34.182 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.182 "is_configured": false, 00:18:34.182 "data_offset": 256, 00:18:34.182 "data_size": 7936 00:18:34.182 } 00:18:34.182 ] 00:18:34.182 }' 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.182 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.750 [2024-11-20 09:30:59.993303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:34.750 [2024-11-20 09:30:59.993469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.750 [2024-11-20 09:30:59.993512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:34.750 [2024-11-20 09:30:59.993571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.750 [2024-11-20 09:30:59.993816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.750 [2024-11-20 09:30:59.993867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:34.750 [2024-11-20 09:30:59.993944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:34.750 [2024-11-20 09:30:59.993993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.750 [2024-11-20 09:30:59.994125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:34.750 [2024-11-20 09:30:59.994163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.750 [2024-11-20 09:30:59.994247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:34.750 [2024-11-20 09:30:59.994390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:34.750 [2024-11-20 09:30:59.994424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:34.750 [2024-11-20 09:30:59.994569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.750 pt2 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.750 09:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.750 "name": "raid_bdev1", 00:18:34.750 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:34.750 "strip_size_kb": 0, 00:18:34.750 "state": "online", 00:18:34.750 "raid_level": "raid1", 00:18:34.750 "superblock": true, 00:18:34.750 "num_base_bdevs": 2, 00:18:34.750 "num_base_bdevs_discovered": 2, 00:18:34.750 "num_base_bdevs_operational": 2, 00:18:34.750 "base_bdevs_list": [ 00:18:34.750 { 00:18:34.750 "name": "pt1", 00:18:34.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.750 "is_configured": true, 00:18:34.750 "data_offset": 256, 00:18:34.750 "data_size": 7936 00:18:34.750 }, 00:18:34.750 { 00:18:34.750 "name": "pt2", 00:18:34.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.750 "is_configured": true, 00:18:34.750 "data_offset": 256, 
00:18:34.750 "data_size": 7936 00:18:34.750 } 00:18:34.750 ] 00:18:34.750 }' 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.750 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.009 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.009 [2024-11-20 09:31:00.456783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.269 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.269 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.269 "name": "raid_bdev1", 00:18:35.269 "aliases": [ 00:18:35.269 "5da824f3-8edf-4179-b8d6-4720e1047779" 00:18:35.269 ], 00:18:35.269 "product_name": 
"Raid Volume", 00:18:35.269 "block_size": 4096, 00:18:35.269 "num_blocks": 7936, 00:18:35.269 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:35.269 "md_size": 32, 00:18:35.269 "md_interleave": false, 00:18:35.269 "dif_type": 0, 00:18:35.269 "assigned_rate_limits": { 00:18:35.269 "rw_ios_per_sec": 0, 00:18:35.269 "rw_mbytes_per_sec": 0, 00:18:35.269 "r_mbytes_per_sec": 0, 00:18:35.269 "w_mbytes_per_sec": 0 00:18:35.269 }, 00:18:35.269 "claimed": false, 00:18:35.269 "zoned": false, 00:18:35.269 "supported_io_types": { 00:18:35.269 "read": true, 00:18:35.269 "write": true, 00:18:35.269 "unmap": false, 00:18:35.269 "flush": false, 00:18:35.269 "reset": true, 00:18:35.269 "nvme_admin": false, 00:18:35.269 "nvme_io": false, 00:18:35.269 "nvme_io_md": false, 00:18:35.269 "write_zeroes": true, 00:18:35.269 "zcopy": false, 00:18:35.269 "get_zone_info": false, 00:18:35.269 "zone_management": false, 00:18:35.269 "zone_append": false, 00:18:35.269 "compare": false, 00:18:35.269 "compare_and_write": false, 00:18:35.269 "abort": false, 00:18:35.269 "seek_hole": false, 00:18:35.269 "seek_data": false, 00:18:35.269 "copy": false, 00:18:35.269 "nvme_iov_md": false 00:18:35.269 }, 00:18:35.269 "memory_domains": [ 00:18:35.269 { 00:18:35.269 "dma_device_id": "system", 00:18:35.269 "dma_device_type": 1 00:18:35.269 }, 00:18:35.269 { 00:18:35.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.269 "dma_device_type": 2 00:18:35.269 }, 00:18:35.269 { 00:18:35.269 "dma_device_id": "system", 00:18:35.269 "dma_device_type": 1 00:18:35.269 }, 00:18:35.269 { 00:18:35.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.269 "dma_device_type": 2 00:18:35.269 } 00:18:35.269 ], 00:18:35.269 "driver_specific": { 00:18:35.269 "raid": { 00:18:35.269 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:35.269 "strip_size_kb": 0, 00:18:35.269 "state": "online", 00:18:35.269 "raid_level": "raid1", 00:18:35.269 "superblock": true, 00:18:35.269 "num_base_bdevs": 2, 00:18:35.269 
"num_base_bdevs_discovered": 2, 00:18:35.269 "num_base_bdevs_operational": 2, 00:18:35.270 "base_bdevs_list": [ 00:18:35.270 { 00:18:35.270 "name": "pt1", 00:18:35.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:35.270 "is_configured": true, 00:18:35.270 "data_offset": 256, 00:18:35.270 "data_size": 7936 00:18:35.270 }, 00:18:35.270 { 00:18:35.270 "name": "pt2", 00:18:35.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.270 "is_configured": true, 00:18:35.270 "data_offset": 256, 00:18:35.270 "data_size": 7936 00:18:35.270 } 00:18:35.270 ] 00:18:35.270 } 00:18:35.270 } 00:18:35.270 }' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:35.270 pt2' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.270 
09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.270 [2024-11-20 09:31:00.672405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 5da824f3-8edf-4179-b8d6-4720e1047779 '!=' 5da824f3-8edf-4179-b8d6-4720e1047779 ']' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.270 [2024-11-20 09:31:00.708088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.270 09:31:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.270 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.530 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.530 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.530 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.530 "name": "raid_bdev1", 00:18:35.530 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:35.530 "strip_size_kb": 0, 00:18:35.530 "state": "online", 00:18:35.530 "raid_level": "raid1", 00:18:35.530 "superblock": true, 00:18:35.530 "num_base_bdevs": 2, 00:18:35.530 "num_base_bdevs_discovered": 1, 00:18:35.530 "num_base_bdevs_operational": 1, 00:18:35.530 "base_bdevs_list": [ 00:18:35.530 { 00:18:35.530 "name": null, 00:18:35.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.530 "is_configured": false, 00:18:35.530 "data_offset": 0, 00:18:35.530 "data_size": 7936 00:18:35.530 }, 00:18:35.530 { 00:18:35.530 "name": "pt2", 00:18:35.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.530 "is_configured": true, 00:18:35.530 "data_offset": 256, 00:18:35.530 "data_size": 7936 00:18:35.530 } 00:18:35.530 ] 00:18:35.530 }' 00:18:35.530 09:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:35.530 09:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.789 [2024-11-20 09:31:01.171305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:35.789 [2024-11-20 09:31:01.171414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.789 [2024-11-20 09:31:01.171534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.789 [2024-11-20 09:31:01.171602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.789 [2024-11-20 09:31:01.171667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:35.789 09:31:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:35.789 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.790 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.049 [2024-11-20 09:31:01.243214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.049 [2024-11-20 09:31:01.243319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.049 
[2024-11-20 09:31:01.243358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:36.049 [2024-11-20 09:31:01.243390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.049 [2024-11-20 09:31:01.245422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.049 [2024-11-20 09:31:01.245511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.049 [2024-11-20 09:31:01.245585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:36.049 [2024-11-20 09:31:01.245648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.049 [2024-11-20 09:31:01.245765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:36.049 [2024-11-20 09:31:01.245804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.049 [2024-11-20 09:31:01.245895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:36.049 [2024-11-20 09:31:01.246047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:36.049 [2024-11-20 09:31:01.246083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:36.049 [2024-11-20 09:31:01.246223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.049 pt2 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.049 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.049 "name": "raid_bdev1", 00:18:36.049 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:36.049 "strip_size_kb": 0, 00:18:36.049 "state": "online", 00:18:36.049 "raid_level": "raid1", 00:18:36.049 "superblock": true, 00:18:36.049 "num_base_bdevs": 2, 00:18:36.049 "num_base_bdevs_discovered": 1, 00:18:36.050 "num_base_bdevs_operational": 1, 00:18:36.050 "base_bdevs_list": [ 00:18:36.050 { 00:18:36.050 
"name": null, 00:18:36.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.050 "is_configured": false, 00:18:36.050 "data_offset": 256, 00:18:36.050 "data_size": 7936 00:18:36.050 }, 00:18:36.050 { 00:18:36.050 "name": "pt2", 00:18:36.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.050 "is_configured": true, 00:18:36.050 "data_offset": 256, 00:18:36.050 "data_size": 7936 00:18:36.050 } 00:18:36.050 ] 00:18:36.050 }' 00:18:36.050 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.050 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.309 [2024-11-20 09:31:01.690441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.309 [2024-11-20 09:31:01.690506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.309 [2024-11-20 09:31:01.690585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.309 [2024-11-20 09:31:01.690635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.309 [2024-11-20 09:31:01.690644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.309 09:31:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.309 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.309 [2024-11-20 09:31:01.754359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.309 [2024-11-20 09:31:01.754497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.309 [2024-11-20 09:31:01.754524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:36.309 [2024-11-20 09:31:01.754534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.310 [2024-11-20 09:31:01.756666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.310 [2024-11-20 09:31:01.756705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.310 [2024-11-20 09:31:01.756762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:36.310 [2024-11-20 09:31:01.756816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.310 [2024-11-20 09:31:01.756971] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:36.310 [2024-11-20 09:31:01.756982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.310 [2024-11-20 09:31:01.757003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:36.310 [2024-11-20 09:31:01.757084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.310 [2024-11-20 09:31:01.757178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:36.310 [2024-11-20 09:31:01.757194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:36.310 [2024-11-20 09:31:01.757277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:36.310 [2024-11-20 09:31:01.757392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:36.310 [2024-11-20 09:31:01.757409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:36.310 [2024-11-20 09:31:01.757543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.310 pt1 00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.310 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.569 "name": "raid_bdev1", 00:18:36.569 "uuid": "5da824f3-8edf-4179-b8d6-4720e1047779", 00:18:36.569 "strip_size_kb": 0, 00:18:36.569 "state": "online", 00:18:36.569 "raid_level": "raid1", 00:18:36.569 "superblock": true, 00:18:36.569 "num_base_bdevs": 2, 00:18:36.569 "num_base_bdevs_discovered": 1, 00:18:36.569 
"num_base_bdevs_operational": 1, 00:18:36.569 "base_bdevs_list": [ 00:18:36.569 { 00:18:36.569 "name": null, 00:18:36.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.569 "is_configured": false, 00:18:36.569 "data_offset": 256, 00:18:36.569 "data_size": 7936 00:18:36.569 }, 00:18:36.569 { 00:18:36.569 "name": "pt2", 00:18:36.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.569 "is_configured": true, 00:18:36.569 "data_offset": 256, 00:18:36.569 "data_size": 7936 00:18:36.569 } 00:18:36.569 ] 00:18:36.569 }' 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.569 09:31:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.829 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:37.088 [2024-11-20 
09:31:02.285713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 5da824f3-8edf-4179-b8d6-4720e1047779 '!=' 5da824f3-8edf-4179-b8d6-4720e1047779 ']' 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87903 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87903 ']' 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87903 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87903 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.088 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.089 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87903' 00:18:37.089 killing process with pid 87903 00:18:37.089 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87903 00:18:37.089 [2024-11-20 09:31:02.373997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.089 [2024-11-20 09:31:02.374138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.089 [2024-11-20 09:31:02.374213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:37.089 09:31:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87903 00:18:37.089 [2024-11-20 09:31:02.374263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:37.349 [2024-11-20 09:31:02.598976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.287 09:31:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:38.287 00:18:38.287 real 0m6.186s 00:18:38.287 user 0m9.360s 00:18:38.287 sys 0m1.109s 00:18:38.287 09:31:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.287 09:31:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.287 ************************************ 00:18:38.287 END TEST raid_superblock_test_md_separate 00:18:38.287 ************************************ 00:18:38.547 09:31:03 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:38.547 09:31:03 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:38.547 09:31:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:38.547 09:31:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.547 09:31:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.547 ************************************ 00:18:38.547 START TEST raid_rebuild_test_sb_md_separate 00:18:38.547 ************************************ 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:38.547 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:38.547 
09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88231 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88231 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88231 ']' 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.548 09:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.548 [2024-11-20 09:31:03.879956] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:18:38.548 [2024-11-20 09:31:03.880129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:38.548 Zero copy mechanism will not be used. 00:18:38.548 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88231 ] 00:18:38.807 [2024-11-20 09:31:04.058051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.807 [2024-11-20 09:31:04.174059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.066 [2024-11-20 09:31:04.380726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.066 [2024-11-20 09:31:04.380870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.325 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.326 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:39.326 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.326 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:39.326 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.326 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.585 BaseBdev1_malloc 
00:18:39.585 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.585 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:39.585 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.585 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 [2024-11-20 09:31:04.808919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:39.586 [2024-11-20 09:31:04.808978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.586 [2024-11-20 09:31:04.809000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:39.586 [2024-11-20 09:31:04.809011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.586 [2024-11-20 09:31:04.810844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.586 [2024-11-20 09:31:04.810967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:39.586 BaseBdev1 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 BaseBdev2_malloc 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 [2024-11-20 09:31:04.863662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:39.586 [2024-11-20 09:31:04.863726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.586 [2024-11-20 09:31:04.863745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:39.586 [2024-11-20 09:31:04.863754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.586 [2024-11-20 09:31:04.865553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.586 [2024-11-20 09:31:04.865590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:39.586 BaseBdev2 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 spare_malloc 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 spare_delay 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 [2024-11-20 09:31:04.947910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.586 [2024-11-20 09:31:04.947969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.586 [2024-11-20 09:31:04.948005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:39.586 [2024-11-20 09:31:04.948016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.586 [2024-11-20 09:31:04.949816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.586 [2024-11-20 09:31:04.949853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.586 spare 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.586 [2024-11-20 09:31:04.959924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.586 [2024-11-20 09:31:04.961722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.586 [2024-11-20 09:31:04.961888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:39.586 [2024-11-20 09:31:04.961904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:39.586 [2024-11-20 09:31:04.961966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:39.586 [2024-11-20 09:31:04.962087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:39.586 [2024-11-20 09:31:04.962094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:39.586 [2024-11-20 09:31:04.962181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.586 09:31:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.586 09:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.586 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.586 "name": "raid_bdev1", 00:18:39.586 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:39.586 "strip_size_kb": 0, 00:18:39.586 "state": "online", 00:18:39.586 "raid_level": "raid1", 00:18:39.586 "superblock": true, 00:18:39.586 "num_base_bdevs": 2, 00:18:39.586 "num_base_bdevs_discovered": 2, 00:18:39.586 "num_base_bdevs_operational": 2, 00:18:39.586 "base_bdevs_list": [ 00:18:39.586 { 00:18:39.586 "name": "BaseBdev1", 00:18:39.586 "uuid": "744ed0e4-927d-555d-b254-10ea4b2fc658", 00:18:39.586 "is_configured": true, 00:18:39.586 "data_offset": 256, 00:18:39.586 "data_size": 7936 00:18:39.586 }, 00:18:39.586 { 00:18:39.586 "name": "BaseBdev2", 00:18:39.586 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:39.586 "is_configured": true, 00:18:39.586 "data_offset": 256, 00:18:39.586 "data_size": 7936 
00:18:39.586 } 00:18:39.586 ] 00:18:39.586 }' 00:18:39.586 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.586 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.154 [2024-11-20 09:31:05.427501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.154 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:40.414 [2024-11-20 09:31:05.714740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.414 /dev/nbd0 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.414 1+0 records in 00:18:40.414 1+0 records out 00:18:40.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400802 s, 10.2 MB/s 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.414 09:31:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:40.414 09:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:40.981 7936+0 records in 00:18:40.981 7936+0 records out 00:18:40.982 32505856 bytes (33 MB, 31 MiB) copied, 0.637594 s, 51.0 MB/s 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.982 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.270 [2024-11-20 09:31:06.648876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.270 09:31:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.270 [2024-11-20 09:31:06.668303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.270 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.271 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.554 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.554 "name": "raid_bdev1", 00:18:41.554 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:41.554 "strip_size_kb": 0, 00:18:41.554 "state": "online", 00:18:41.554 "raid_level": "raid1", 00:18:41.554 "superblock": true, 00:18:41.554 "num_base_bdevs": 2, 00:18:41.554 "num_base_bdevs_discovered": 1, 00:18:41.554 "num_base_bdevs_operational": 1, 00:18:41.554 "base_bdevs_list": [ 00:18:41.554 { 00:18:41.554 "name": null, 00:18:41.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.554 "is_configured": false, 00:18:41.554 "data_offset": 0, 00:18:41.554 "data_size": 7936 00:18:41.554 }, 00:18:41.554 { 00:18:41.554 "name": "BaseBdev2", 00:18:41.554 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:41.554 "is_configured": true, 00:18:41.554 "data_offset": 256, 00:18:41.554 "data_size": 7936 00:18:41.554 } 00:18:41.554 ] 00:18:41.554 }' 00:18:41.554 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.554 09:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.813 09:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.813 09:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.813 09:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.813 [2024-11-20 09:31:07.119554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.813 [2024-11-20 09:31:07.133781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:41.813 09:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.813 09:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:41.813 [2024-11-20 09:31:07.135541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.751 "name": "raid_bdev1", 00:18:42.751 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:42.751 "strip_size_kb": 0, 00:18:42.751 "state": "online", 00:18:42.751 "raid_level": "raid1", 00:18:42.751 "superblock": true, 00:18:42.751 "num_base_bdevs": 2, 00:18:42.751 "num_base_bdevs_discovered": 2, 00:18:42.751 "num_base_bdevs_operational": 2, 00:18:42.751 "process": { 00:18:42.751 "type": "rebuild", 00:18:42.751 "target": "spare", 00:18:42.751 "progress": { 00:18:42.751 "blocks": 2560, 00:18:42.751 "percent": 32 00:18:42.751 } 00:18:42.751 }, 00:18:42.751 "base_bdevs_list": [ 00:18:42.751 { 00:18:42.751 "name": "spare", 00:18:42.751 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:42.751 "is_configured": true, 00:18:42.751 "data_offset": 256, 00:18:42.751 "data_size": 7936 00:18:42.751 }, 00:18:42.751 { 00:18:42.751 "name": "BaseBdev2", 00:18:42.751 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:42.751 "is_configured": true, 00:18:42.751 "data_offset": 256, 00:18:42.751 "data_size": 7936 00:18:42.751 } 00:18:42.751 ] 00:18:42.751 }' 00:18:42.751 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.011 09:31:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.011 [2024-11-20 09:31:08.287589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.011 [2024-11-20 09:31:08.341285] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.011 [2024-11-20 09:31:08.341371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.011 [2024-11-20 09:31:08.341386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.011 [2024-11-20 09:31:08.341395] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.011 09:31:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.011 "name": "raid_bdev1", 00:18:43.011 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:43.011 "strip_size_kb": 0, 00:18:43.011 "state": "online", 00:18:43.011 "raid_level": "raid1", 00:18:43.011 "superblock": true, 00:18:43.011 "num_base_bdevs": 2, 00:18:43.011 "num_base_bdevs_discovered": 1, 00:18:43.011 "num_base_bdevs_operational": 1, 00:18:43.011 "base_bdevs_list": [ 00:18:43.011 { 00:18:43.011 "name": null, 00:18:43.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.011 "is_configured": false, 00:18:43.011 "data_offset": 0, 00:18:43.011 "data_size": 7936 00:18:43.011 }, 00:18:43.011 { 00:18:43.011 "name": "BaseBdev2", 00:18:43.011 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:43.011 "is_configured": true, 00:18:43.011 "data_offset": 256, 00:18:43.011 "data_size": 7936 00:18:43.011 } 00:18:43.011 ] 00:18:43.011 }' 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.011 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.581 "name": "raid_bdev1", 00:18:43.581 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:43.581 "strip_size_kb": 0, 00:18:43.581 "state": "online", 00:18:43.581 "raid_level": "raid1", 00:18:43.581 "superblock": true, 00:18:43.581 "num_base_bdevs": 2, 00:18:43.581 "num_base_bdevs_discovered": 1, 00:18:43.581 "num_base_bdevs_operational": 1, 00:18:43.581 "base_bdevs_list": [ 00:18:43.581 { 00:18:43.581 "name": null, 00:18:43.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.581 
"is_configured": false, 00:18:43.581 "data_offset": 0, 00:18:43.581 "data_size": 7936 00:18:43.581 }, 00:18:43.581 { 00:18:43.581 "name": "BaseBdev2", 00:18:43.581 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:43.581 "is_configured": true, 00:18:43.581 "data_offset": 256, 00:18:43.581 "data_size": 7936 00:18:43.581 } 00:18:43.581 ] 00:18:43.581 }' 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.581 [2024-11-20 09:31:08.973419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.581 [2024-11-20 09:31:08.988113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.581 09:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:43.581 [2024-11-20 09:31:08.990031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.966 09:31:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.966 09:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.966 "name": "raid_bdev1", 00:18:44.966 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:44.966 "strip_size_kb": 0, 00:18:44.966 "state": "online", 00:18:44.966 "raid_level": "raid1", 00:18:44.966 "superblock": true, 00:18:44.966 "num_base_bdevs": 2, 00:18:44.966 "num_base_bdevs_discovered": 2, 00:18:44.966 "num_base_bdevs_operational": 2, 00:18:44.966 "process": { 00:18:44.966 "type": "rebuild", 00:18:44.966 "target": "spare", 00:18:44.966 "progress": { 00:18:44.966 "blocks": 2560, 00:18:44.966 "percent": 32 00:18:44.966 } 00:18:44.966 }, 00:18:44.966 "base_bdevs_list": [ 00:18:44.966 { 00:18:44.966 "name": "spare", 00:18:44.966 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:44.966 "is_configured": true, 00:18:44.966 "data_offset": 256, 00:18:44.966 "data_size": 7936 00:18:44.966 }, 
00:18:44.966 { 00:18:44.966 "name": "BaseBdev2", 00:18:44.966 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:44.966 "is_configured": true, 00:18:44.966 "data_offset": 256, 00:18:44.966 "data_size": 7936 00:18:44.966 } 00:18:44.966 ] 00:18:44.966 }' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:44.966 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=745 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.966 09:31:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.966 "name": "raid_bdev1", 00:18:44.966 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:44.966 "strip_size_kb": 0, 00:18:44.966 "state": "online", 00:18:44.966 "raid_level": "raid1", 00:18:44.966 "superblock": true, 00:18:44.966 "num_base_bdevs": 2, 00:18:44.966 "num_base_bdevs_discovered": 2, 00:18:44.966 "num_base_bdevs_operational": 2, 00:18:44.966 "process": { 00:18:44.966 "type": "rebuild", 00:18:44.966 "target": "spare", 00:18:44.966 "progress": { 00:18:44.966 "blocks": 2816, 00:18:44.966 "percent": 35 00:18:44.966 } 00:18:44.966 }, 00:18:44.966 "base_bdevs_list": [ 00:18:44.966 { 00:18:44.966 "name": "spare", 00:18:44.966 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:44.966 "is_configured": true, 00:18:44.966 "data_offset": 256, 00:18:44.966 "data_size": 7936 00:18:44.966 }, 00:18:44.966 { 00:18:44.966 "name": "BaseBdev2", 00:18:44.966 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:44.966 
"is_configured": true, 00:18:44.966 "data_offset": 256, 00:18:44.966 "data_size": 7936 00:18:44.966 } 00:18:44.966 ] 00:18:44.966 }' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.966 09:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.910 09:31:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.910 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.910 "name": "raid_bdev1", 00:18:45.910 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:45.910 "strip_size_kb": 0, 00:18:45.910 "state": "online", 00:18:45.910 "raid_level": "raid1", 00:18:45.910 "superblock": true, 00:18:45.910 "num_base_bdevs": 2, 00:18:45.910 "num_base_bdevs_discovered": 2, 00:18:45.910 "num_base_bdevs_operational": 2, 00:18:45.910 "process": { 00:18:45.910 "type": "rebuild", 00:18:45.910 "target": "spare", 00:18:45.910 "progress": { 00:18:45.910 "blocks": 5888, 00:18:45.910 "percent": 74 00:18:45.910 } 00:18:45.910 }, 00:18:45.910 "base_bdevs_list": [ 00:18:45.910 { 00:18:45.910 "name": "spare", 00:18:45.910 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:45.910 "is_configured": true, 00:18:45.910 "data_offset": 256, 00:18:45.910 "data_size": 7936 00:18:45.910 }, 00:18:45.910 { 00:18:45.910 "name": "BaseBdev2", 00:18:45.910 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:45.910 "is_configured": true, 00:18:45.911 "data_offset": 256, 00:18:45.911 "data_size": 7936 00:18:45.911 } 00:18:45.911 ] 00:18:45.911 }' 00:18:45.911 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.169 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.169 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.169 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.169 09:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.736 [2024-11-20 09:31:12.103808] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:46.736 [2024-11-20 09:31:12.103976] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:46.736 [2024-11-20 09:31:12.104085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.305 "name": "raid_bdev1", 00:18:47.305 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:47.305 "strip_size_kb": 0, 00:18:47.305 "state": "online", 00:18:47.305 "raid_level": "raid1", 00:18:47.305 "superblock": true, 00:18:47.305 
"num_base_bdevs": 2, 00:18:47.305 "num_base_bdevs_discovered": 2, 00:18:47.305 "num_base_bdevs_operational": 2, 00:18:47.305 "base_bdevs_list": [ 00:18:47.305 { 00:18:47.305 "name": "spare", 00:18:47.305 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:47.305 "is_configured": true, 00:18:47.305 "data_offset": 256, 00:18:47.305 "data_size": 7936 00:18:47.305 }, 00:18:47.305 { 00:18:47.305 "name": "BaseBdev2", 00:18:47.305 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:47.305 "is_configured": true, 00:18:47.305 "data_offset": 256, 00:18:47.305 "data_size": 7936 00:18:47.305 } 00:18:47.305 ] 00:18:47.305 }' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.305 09:31:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.305 "name": "raid_bdev1", 00:18:47.305 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:47.305 "strip_size_kb": 0, 00:18:47.305 "state": "online", 00:18:47.305 "raid_level": "raid1", 00:18:47.305 "superblock": true, 00:18:47.305 "num_base_bdevs": 2, 00:18:47.305 "num_base_bdevs_discovered": 2, 00:18:47.305 "num_base_bdevs_operational": 2, 00:18:47.305 "base_bdevs_list": [ 00:18:47.305 { 00:18:47.305 "name": "spare", 00:18:47.305 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:47.305 "is_configured": true, 00:18:47.305 "data_offset": 256, 00:18:47.305 "data_size": 7936 00:18:47.305 }, 00:18:47.305 { 00:18:47.305 "name": "BaseBdev2", 00:18:47.305 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:47.305 "is_configured": true, 00:18:47.305 "data_offset": 256, 00:18:47.305 "data_size": 7936 00:18:47.305 } 00:18:47.305 ] 00:18:47.305 }' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.305 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.306 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.565 "name": "raid_bdev1", 00:18:47.565 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:47.565 
"strip_size_kb": 0, 00:18:47.565 "state": "online", 00:18:47.565 "raid_level": "raid1", 00:18:47.565 "superblock": true, 00:18:47.565 "num_base_bdevs": 2, 00:18:47.565 "num_base_bdevs_discovered": 2, 00:18:47.565 "num_base_bdevs_operational": 2, 00:18:47.565 "base_bdevs_list": [ 00:18:47.565 { 00:18:47.565 "name": "spare", 00:18:47.565 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:47.565 "is_configured": true, 00:18:47.565 "data_offset": 256, 00:18:47.565 "data_size": 7936 00:18:47.565 }, 00:18:47.565 { 00:18:47.565 "name": "BaseBdev2", 00:18:47.565 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:47.565 "is_configured": true, 00:18:47.565 "data_offset": 256, 00:18:47.565 "data_size": 7936 00:18:47.565 } 00:18:47.565 ] 00:18:47.565 }' 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.565 09:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 [2024-11-20 09:31:13.246590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.824 [2024-11-20 09:31:13.246640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.824 [2024-11-20 09:31:13.246730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.824 [2024-11-20 09:31:13.246796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.824 [2024-11-20 09:31:13.246806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.082 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:48.082 /dev/nbd0 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.341 1+0 records in 00:18:48.341 1+0 records out 00:18:48.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473069 s, 8.7 MB/s 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.341 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:48.601 /dev/nbd1 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.601 1+0 records in 00:18:48.601 1+0 records out 00:18:48.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284726 s, 14.4 MB/s 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.601 09:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.601 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.860 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.118 [2024-11-20 09:31:14.517470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.118 [2024-11-20 09:31:14.517535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.118 [2024-11-20 09:31:14.517558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:49.118 [2024-11-20 09:31:14.517570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:49.118 [2024-11-20 09:31:14.519648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.118 [2024-11-20 09:31:14.519687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.118 [2024-11-20 09:31:14.519750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.118 [2024-11-20 09:31:14.519800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.118 [2024-11-20 09:31:14.519928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.118 spare 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.118 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.377 [2024-11-20 09:31:14.619820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:49.377 [2024-11-20 09:31:14.619953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:49.377 [2024-11-20 09:31:14.620095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:49.377 [2024-11-20 09:31:14.620266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:49.377 [2024-11-20 09:31:14.620276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:49.377 [2024-11-20 09:31:14.620422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.377 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:49.377 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.377 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.378 "name": "raid_bdev1", 00:18:49.378 "uuid": 
"ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:49.378 "strip_size_kb": 0, 00:18:49.378 "state": "online", 00:18:49.378 "raid_level": "raid1", 00:18:49.378 "superblock": true, 00:18:49.378 "num_base_bdevs": 2, 00:18:49.378 "num_base_bdevs_discovered": 2, 00:18:49.378 "num_base_bdevs_operational": 2, 00:18:49.378 "base_bdevs_list": [ 00:18:49.378 { 00:18:49.378 "name": "spare", 00:18:49.378 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:49.378 "is_configured": true, 00:18:49.378 "data_offset": 256, 00:18:49.378 "data_size": 7936 00:18:49.378 }, 00:18:49.378 { 00:18:49.378 "name": "BaseBdev2", 00:18:49.378 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:49.378 "is_configured": true, 00:18:49.378 "data_offset": 256, 00:18:49.378 "data_size": 7936 00:18:49.378 } 00:18:49.378 ] 00:18:49.378 }' 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.378 09:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.944 "name": "raid_bdev1", 00:18:49.944 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:49.944 "strip_size_kb": 0, 00:18:49.944 "state": "online", 00:18:49.944 "raid_level": "raid1", 00:18:49.944 "superblock": true, 00:18:49.944 "num_base_bdevs": 2, 00:18:49.944 "num_base_bdevs_discovered": 2, 00:18:49.944 "num_base_bdevs_operational": 2, 00:18:49.944 "base_bdevs_list": [ 00:18:49.944 { 00:18:49.944 "name": "spare", 00:18:49.944 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:49.944 "is_configured": true, 00:18:49.944 "data_offset": 256, 00:18:49.944 "data_size": 7936 00:18:49.944 }, 00:18:49.944 { 00:18:49.944 "name": "BaseBdev2", 00:18:49.944 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:49.944 "is_configured": true, 00:18:49.944 "data_offset": 256, 00:18:49.944 "data_size": 7936 00:18:49.944 } 00:18:49.944 ] 00:18:49.944 }' 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.944 
09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.944 [2024-11-20 09:31:15.284249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.944 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.945 "name": "raid_bdev1", 00:18:49.945 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:49.945 "strip_size_kb": 0, 00:18:49.945 "state": "online", 00:18:49.945 "raid_level": "raid1", 00:18:49.945 "superblock": true, 00:18:49.945 "num_base_bdevs": 2, 00:18:49.945 "num_base_bdevs_discovered": 1, 00:18:49.945 "num_base_bdevs_operational": 1, 00:18:49.945 "base_bdevs_list": [ 00:18:49.945 { 00:18:49.945 "name": null, 00:18:49.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.945 "is_configured": false, 00:18:49.945 "data_offset": 0, 00:18:49.945 "data_size": 7936 00:18:49.945 }, 00:18:49.945 { 00:18:49.945 "name": "BaseBdev2", 00:18:49.945 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:49.945 "is_configured": true, 00:18:49.945 "data_offset": 256, 00:18:49.945 "data_size": 7936 00:18:49.945 } 00:18:49.945 ] 00:18:49.945 }' 00:18:49.945 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.945 09:31:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.514 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.514 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.514 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.514 [2024-11-20 09:31:15.715542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.514 [2024-11-20 09:31:15.715855] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.514 [2024-11-20 09:31:15.715921] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:50.514 [2024-11-20 09:31:15.716002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.514 [2024-11-20 09:31:15.730259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:50.514 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.514 09:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:50.514 [2024-11-20 09:31:15.732180] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.450 "name": "raid_bdev1", 00:18:51.450 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:51.450 "strip_size_kb": 0, 00:18:51.450 "state": "online", 00:18:51.450 "raid_level": "raid1", 00:18:51.450 "superblock": true, 00:18:51.450 "num_base_bdevs": 2, 00:18:51.450 "num_base_bdevs_discovered": 2, 00:18:51.450 "num_base_bdevs_operational": 2, 00:18:51.450 "process": { 00:18:51.450 "type": "rebuild", 00:18:51.450 "target": "spare", 00:18:51.450 "progress": { 00:18:51.450 "blocks": 2560, 00:18:51.450 "percent": 32 00:18:51.450 } 00:18:51.450 }, 00:18:51.450 "base_bdevs_list": [ 00:18:51.450 { 00:18:51.450 "name": "spare", 00:18:51.450 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:51.450 "is_configured": true, 00:18:51.450 "data_offset": 256, 00:18:51.450 "data_size": 7936 00:18:51.450 }, 00:18:51.450 { 00:18:51.450 "name": "BaseBdev2", 00:18:51.450 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:51.450 "is_configured": true, 00:18:51.450 "data_offset": 256, 00:18:51.450 "data_size": 7936 00:18:51.450 } 00:18:51.450 ] 00:18:51.450 }' 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.450 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.450 [2024-11-20 09:31:16.896096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.709 [2024-11-20 09:31:16.937712] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.709 [2024-11-20 09:31:16.937781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.709 [2024-11-20 09:31:16.937795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.709 [2024-11-20 09:31:16.937816] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.709 09:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.709 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.709 "name": "raid_bdev1", 00:18:51.709 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:51.709 "strip_size_kb": 0, 00:18:51.709 "state": "online", 00:18:51.709 "raid_level": "raid1", 00:18:51.709 "superblock": true, 00:18:51.709 "num_base_bdevs": 2, 00:18:51.709 "num_base_bdevs_discovered": 1, 00:18:51.709 "num_base_bdevs_operational": 1, 00:18:51.709 "base_bdevs_list": [ 00:18:51.709 { 00:18:51.709 "name": null, 00:18:51.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.709 
"is_configured": false, 00:18:51.709 "data_offset": 0, 00:18:51.709 "data_size": 7936 00:18:51.709 }, 00:18:51.709 { 00:18:51.709 "name": "BaseBdev2", 00:18:51.709 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:51.709 "is_configured": true, 00:18:51.709 "data_offset": 256, 00:18:51.709 "data_size": 7936 00:18:51.709 } 00:18:51.709 ] 00:18:51.709 }' 00:18:51.709 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.709 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.275 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:52.275 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.275 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.275 [2024-11-20 09:31:17.445561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.275 [2024-11-20 09:31:17.445736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.275 [2024-11-20 09:31:17.445780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:52.275 [2024-11-20 09:31:17.445813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.275 [2024-11-20 09:31:17.446098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.275 [2024-11-20 09:31:17.446156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.275 [2024-11-20 09:31:17.446244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:52.275 [2024-11-20 09:31:17.446286] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:52.275 [2024-11-20 09:31:17.446326] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:52.275 [2024-11-20 09:31:17.446411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.275 [2024-11-20 09:31:17.460834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:52.275 spare 00:18:52.275 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.275 [2024-11-20 09:31:17.462753] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.275 09:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.213 "name": "raid_bdev1", 00:18:53.213 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:53.213 "strip_size_kb": 0, 00:18:53.213 "state": "online", 00:18:53.213 "raid_level": "raid1", 00:18:53.213 "superblock": true, 00:18:53.213 "num_base_bdevs": 2, 00:18:53.213 "num_base_bdevs_discovered": 2, 00:18:53.213 "num_base_bdevs_operational": 2, 00:18:53.213 "process": { 00:18:53.213 "type": "rebuild", 00:18:53.213 "target": "spare", 00:18:53.213 "progress": { 00:18:53.213 "blocks": 2560, 00:18:53.213 "percent": 32 00:18:53.213 } 00:18:53.213 }, 00:18:53.213 "base_bdevs_list": [ 00:18:53.213 { 00:18:53.213 "name": "spare", 00:18:53.213 "uuid": "9c8028e1-5042-5abc-a3ab-fa814cff36e3", 00:18:53.213 "is_configured": true, 00:18:53.213 "data_offset": 256, 00:18:53.213 "data_size": 7936 00:18:53.213 }, 00:18:53.213 { 00:18:53.213 "name": "BaseBdev2", 00:18:53.213 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:53.213 "is_configured": true, 00:18:53.213 "data_offset": 256, 00:18:53.213 "data_size": 7936 00:18:53.213 } 00:18:53.213 ] 00:18:53.213 }' 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.213 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.213 09:31:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.213 [2024-11-20 09:31:18.614900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.473 [2024-11-20 09:31:18.668516] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.473 [2024-11-20 09:31:18.668584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.473 [2024-11-20 09:31:18.668602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.473 [2024-11-20 09:31:18.668608] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.473 09:31:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.473 "name": "raid_bdev1", 00:18:53.473 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:53.473 "strip_size_kb": 0, 00:18:53.473 "state": "online", 00:18:53.473 "raid_level": "raid1", 00:18:53.473 "superblock": true, 00:18:53.473 "num_base_bdevs": 2, 00:18:53.473 "num_base_bdevs_discovered": 1, 00:18:53.473 "num_base_bdevs_operational": 1, 00:18:53.473 "base_bdevs_list": [ 00:18:53.473 { 00:18:53.473 "name": null, 00:18:53.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.473 "is_configured": false, 00:18:53.473 "data_offset": 0, 00:18:53.473 "data_size": 7936 00:18:53.473 }, 00:18:53.473 { 00:18:53.473 "name": "BaseBdev2", 00:18:53.473 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:53.473 "is_configured": true, 00:18:53.473 "data_offset": 256, 00:18:53.473 "data_size": 7936 00:18:53.473 } 00:18:53.473 ] 00:18:53.473 }' 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.473 09:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.733 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.000 "name": "raid_bdev1", 00:18:54.000 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:54.000 "strip_size_kb": 0, 00:18:54.000 "state": "online", 00:18:54.000 "raid_level": "raid1", 00:18:54.000 "superblock": true, 00:18:54.000 "num_base_bdevs": 2, 00:18:54.000 "num_base_bdevs_discovered": 1, 00:18:54.000 "num_base_bdevs_operational": 1, 00:18:54.000 "base_bdevs_list": [ 00:18:54.000 { 00:18:54.000 "name": null, 00:18:54.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.000 "is_configured": false, 00:18:54.000 "data_offset": 0, 00:18:54.000 "data_size": 7936 00:18:54.000 }, 00:18:54.000 { 00:18:54.000 "name": "BaseBdev2", 00:18:54.000 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:54.000 "is_configured": true, 
00:18:54.000 "data_offset": 256, 00:18:54.000 "data_size": 7936 00:18:54.000 } 00:18:54.000 ] 00:18:54.000 }' 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.000 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:54.001 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.001 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.001 [2024-11-20 09:31:19.312794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:54.001 [2024-11-20 09:31:19.312963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.001 [2024-11-20 09:31:19.312995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:54.001 [2024-11-20 09:31:19.313005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.001 [2024-11-20 09:31:19.313237] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.001 [2024-11-20 09:31:19.313250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:54.001 [2024-11-20 09:31:19.313305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:54.001 [2024-11-20 09:31:19.313317] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:54.001 [2024-11-20 09:31:19.313327] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:54.001 [2024-11-20 09:31:19.313337] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:54.001 BaseBdev1 00:18:54.001 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.001 09:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.964 "name": "raid_bdev1", 00:18:54.964 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:54.964 "strip_size_kb": 0, 00:18:54.964 "state": "online", 00:18:54.964 "raid_level": "raid1", 00:18:54.964 "superblock": true, 00:18:54.964 "num_base_bdevs": 2, 00:18:54.964 "num_base_bdevs_discovered": 1, 00:18:54.964 "num_base_bdevs_operational": 1, 00:18:54.964 "base_bdevs_list": [ 00:18:54.964 { 00:18:54.964 "name": null, 00:18:54.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.964 "is_configured": false, 00:18:54.964 "data_offset": 0, 00:18:54.964 "data_size": 7936 00:18:54.964 }, 00:18:54.964 { 00:18:54.964 "name": "BaseBdev2", 00:18:54.964 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:54.964 "is_configured": true, 00:18:54.964 "data_offset": 256, 00:18:54.964 "data_size": 7936 00:18:54.964 } 00:18:54.964 ] 00:18:54.964 }' 00:18:54.964 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.964 09:31:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.531 "name": "raid_bdev1", 00:18:55.531 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:55.531 "strip_size_kb": 0, 00:18:55.531 "state": "online", 00:18:55.531 "raid_level": "raid1", 00:18:55.531 "superblock": true, 00:18:55.531 "num_base_bdevs": 2, 00:18:55.531 "num_base_bdevs_discovered": 1, 00:18:55.531 "num_base_bdevs_operational": 1, 00:18:55.531 "base_bdevs_list": [ 00:18:55.531 { 00:18:55.531 "name": null, 00:18:55.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.531 "is_configured": false, 00:18:55.531 "data_offset": 0, 00:18:55.531 
"data_size": 7936 00:18:55.531 }, 00:18:55.531 { 00:18:55.531 "name": "BaseBdev2", 00:18:55.531 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:55.531 "is_configured": true, 00:18:55.531 "data_offset": 256, 00:18:55.531 "data_size": 7936 00:18:55.531 } 00:18:55.531 ] 00:18:55.531 }' 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.531 [2024-11-20 09:31:20.938512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.531 [2024-11-20 09:31:20.938786] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.531 [2024-11-20 09:31:20.938849] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:55.531 request: 00:18:55.531 { 00:18:55.531 "base_bdev": "BaseBdev1", 00:18:55.531 "raid_bdev": "raid_bdev1", 00:18:55.531 "method": "bdev_raid_add_base_bdev", 00:18:55.531 "req_id": 1 00:18:55.531 } 00:18:55.531 Got JSON-RPC error response 00:18:55.531 response: 00:18:55.531 { 00:18:55.531 "code": -22, 00:18:55.531 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:55.531 } 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.531 09:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.909 09:31:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.909 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.909 "name": "raid_bdev1", 00:18:56.909 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:56.909 "strip_size_kb": 0, 00:18:56.909 "state": "online", 00:18:56.909 "raid_level": "raid1", 00:18:56.909 "superblock": true, 00:18:56.909 "num_base_bdevs": 2, 00:18:56.909 "num_base_bdevs_discovered": 1, 00:18:56.909 "num_base_bdevs_operational": 1, 00:18:56.909 "base_bdevs_list": [ 
00:18:56.909 { 00:18:56.909 "name": null, 00:18:56.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.909 "is_configured": false, 00:18:56.909 "data_offset": 0, 00:18:56.909 "data_size": 7936 00:18:56.909 }, 00:18:56.909 { 00:18:56.909 "name": "BaseBdev2", 00:18:56.909 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:56.909 "is_configured": true, 00:18:56.909 "data_offset": 256, 00:18:56.909 "data_size": 7936 00:18:56.909 } 00:18:56.909 ] 00:18:56.909 }' 00:18:56.909 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.909 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.169 "name": "raid_bdev1", 00:18:57.169 "uuid": "ec78df8b-9386-4e37-a5b4-6831bd11f7f1", 00:18:57.169 "strip_size_kb": 0, 00:18:57.169 "state": "online", 00:18:57.169 "raid_level": "raid1", 00:18:57.169 "superblock": true, 00:18:57.169 "num_base_bdevs": 2, 00:18:57.169 "num_base_bdevs_discovered": 1, 00:18:57.169 "num_base_bdevs_operational": 1, 00:18:57.169 "base_bdevs_list": [ 00:18:57.169 { 00:18:57.169 "name": null, 00:18:57.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.169 "is_configured": false, 00:18:57.169 "data_offset": 0, 00:18:57.169 "data_size": 7936 00:18:57.169 }, 00:18:57.169 { 00:18:57.169 "name": "BaseBdev2", 00:18:57.169 "uuid": "62ed3a32-5e81-5f6a-80f4-cabd190b01e9", 00:18:57.169 "is_configured": true, 00:18:57.169 "data_offset": 256, 00:18:57.169 "data_size": 7936 00:18:57.169 } 00:18:57.169 ] 00:18:57.169 }' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88231 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88231 ']' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88231 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.169 
09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88231 00:18:57.169 killing process with pid 88231 00:18:57.169 Received shutdown signal, test time was about 60.000000 seconds 00:18:57.169 00:18:57.169 Latency(us) 00:18:57.169 [2024-11-20T09:31:22.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.169 [2024-11-20T09:31:22.625Z] =================================================================================================================== 00:18:57.169 [2024-11-20T09:31:22.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88231' 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88231 00:18:57.169 [2024-11-20 09:31:22.554496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.169 [2024-11-20 09:31:22.554637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.169 09:31:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88231 00:18:57.169 [2024-11-20 09:31:22.554687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.169 [2024-11-20 09:31:22.554699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:57.737 [2024-11-20 09:31:22.881963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.676 ************************************ 00:18:58.676 END TEST 
raid_rebuild_test_sb_md_separate 00:18:58.676 ************************************ 00:18:58.676 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:58.676 00:18:58.676 real 0m20.221s 00:18:58.676 user 0m26.389s 00:18:58.676 sys 0m2.777s 00:18:58.676 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.676 09:31:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.676 09:31:24 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:58.676 09:31:24 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:58.676 09:31:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:58.676 09:31:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.676 09:31:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.676 ************************************ 00:18:58.676 START TEST raid_state_function_test_sb_md_interleaved 00:18:58.676 ************************************ 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:58.676 09:31:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = 
true ']' 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88924 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88924' 00:18:58.676 Process raid pid: 88924 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88924 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88924 ']' 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.676 09:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.935 [2024-11-20 09:31:24.171712] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:18:58.935 [2024-11-20 09:31:24.171850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.935 [2024-11-20 09:31:24.347988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.195 [2024-11-20 09:31:24.465549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.454 [2024-11-20 09:31:24.668957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.454 [2024-11-20 09:31:24.668999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.714 [2024-11-20 09:31:25.026476] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.714 [2024-11-20 09:31:25.026530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.714 [2024-11-20 09:31:25.026541] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.714 [2024-11-20 09:31:25.026551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.714 09:31:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.714 09:31:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.714 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.714 "name": "Existed_Raid", 00:18:59.715 "uuid": "a5312359-a52d-4fbc-ba4b-f61f9afcf1ea", 00:18:59.715 "strip_size_kb": 0, 00:18:59.715 "state": "configuring", 00:18:59.715 "raid_level": "raid1", 00:18:59.715 "superblock": true, 00:18:59.715 "num_base_bdevs": 2, 00:18:59.715 "num_base_bdevs_discovered": 0, 00:18:59.715 "num_base_bdevs_operational": 2, 00:18:59.715 "base_bdevs_list": [ 00:18:59.715 { 00:18:59.715 "name": "BaseBdev1", 00:18:59.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.715 "is_configured": false, 00:18:59.715 "data_offset": 0, 00:18:59.715 "data_size": 0 00:18:59.715 }, 00:18:59.715 { 00:18:59.715 "name": "BaseBdev2", 00:18:59.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.715 "is_configured": false, 00:18:59.715 "data_offset": 0, 00:18:59.715 "data_size": 0 00:18:59.715 } 00:18:59.715 ] 00:18:59.715 }' 00:18:59.715 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.715 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.283 [2024-11-20 09:31:25.441701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.283 [2024-11-20 09:31:25.441747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.283 [2024-11-20 09:31:25.453666] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.283 [2024-11-20 09:31:25.453714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.283 [2024-11-20 09:31:25.453724] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.283 [2024-11-20 09:31:25.453737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.283 [2024-11-20 09:31:25.503917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.283 BaseBdev1 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:00.283 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.284 [ 00:19:00.284 { 00:19:00.284 "name": "BaseBdev1", 00:19:00.284 "aliases": [ 00:19:00.284 "6468c53b-714e-4928-8343-15dceb578680" 00:19:00.284 ], 00:19:00.284 "product_name": "Malloc disk", 00:19:00.284 "block_size": 4128, 00:19:00.284 "num_blocks": 8192, 00:19:00.284 "uuid": "6468c53b-714e-4928-8343-15dceb578680", 00:19:00.284 "md_size": 32, 00:19:00.284 
"md_interleave": true, 00:19:00.284 "dif_type": 0, 00:19:00.284 "assigned_rate_limits": { 00:19:00.284 "rw_ios_per_sec": 0, 00:19:00.284 "rw_mbytes_per_sec": 0, 00:19:00.284 "r_mbytes_per_sec": 0, 00:19:00.284 "w_mbytes_per_sec": 0 00:19:00.284 }, 00:19:00.284 "claimed": true, 00:19:00.284 "claim_type": "exclusive_write", 00:19:00.284 "zoned": false, 00:19:00.284 "supported_io_types": { 00:19:00.284 "read": true, 00:19:00.284 "write": true, 00:19:00.284 "unmap": true, 00:19:00.284 "flush": true, 00:19:00.284 "reset": true, 00:19:00.284 "nvme_admin": false, 00:19:00.284 "nvme_io": false, 00:19:00.284 "nvme_io_md": false, 00:19:00.284 "write_zeroes": true, 00:19:00.284 "zcopy": true, 00:19:00.284 "get_zone_info": false, 00:19:00.284 "zone_management": false, 00:19:00.284 "zone_append": false, 00:19:00.284 "compare": false, 00:19:00.284 "compare_and_write": false, 00:19:00.284 "abort": true, 00:19:00.284 "seek_hole": false, 00:19:00.284 "seek_data": false, 00:19:00.284 "copy": true, 00:19:00.284 "nvme_iov_md": false 00:19:00.284 }, 00:19:00.284 "memory_domains": [ 00:19:00.284 { 00:19:00.284 "dma_device_id": "system", 00:19:00.284 "dma_device_type": 1 00:19:00.284 }, 00:19:00.284 { 00:19:00.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.284 "dma_device_type": 2 00:19:00.284 } 00:19:00.284 ], 00:19:00.284 "driver_specific": {} 00:19:00.284 } 00:19:00.284 ] 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.284 09:31:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.284 "name": "Existed_Raid", 00:19:00.284 "uuid": "05aecb1b-d15b-4c92-88dd-5a2642ac4065", 00:19:00.284 "strip_size_kb": 0, 00:19:00.284 "state": "configuring", 00:19:00.284 "raid_level": "raid1", 
00:19:00.284 "superblock": true, 00:19:00.284 "num_base_bdevs": 2, 00:19:00.284 "num_base_bdevs_discovered": 1, 00:19:00.284 "num_base_bdevs_operational": 2, 00:19:00.284 "base_bdevs_list": [ 00:19:00.284 { 00:19:00.284 "name": "BaseBdev1", 00:19:00.284 "uuid": "6468c53b-714e-4928-8343-15dceb578680", 00:19:00.284 "is_configured": true, 00:19:00.284 "data_offset": 256, 00:19:00.284 "data_size": 7936 00:19:00.284 }, 00:19:00.284 { 00:19:00.284 "name": "BaseBdev2", 00:19:00.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.284 "is_configured": false, 00:19:00.284 "data_offset": 0, 00:19:00.284 "data_size": 0 00:19:00.284 } 00:19:00.284 ] 00:19:00.284 }' 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.284 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.543 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.543 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.543 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.543 [2024-11-20 09:31:25.991213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.543 [2024-11-20 09:31:25.991273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:00.543 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.802 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:00.802 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:00.802 09:31:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.802 [2024-11-20 09:31:26.003240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.802 [2024-11-20 09:31:26.005064] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.802 [2024-11-20 09:31:26.005109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.802 
09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.802 "name": "Existed_Raid", 00:19:00.802 "uuid": "ee6bcdab-65eb-477f-b12d-175817939080", 00:19:00.802 "strip_size_kb": 0, 00:19:00.802 "state": "configuring", 00:19:00.802 "raid_level": "raid1", 00:19:00.802 "superblock": true, 00:19:00.802 "num_base_bdevs": 2, 00:19:00.802 "num_base_bdevs_discovered": 1, 00:19:00.802 "num_base_bdevs_operational": 2, 00:19:00.802 "base_bdevs_list": [ 00:19:00.802 { 00:19:00.802 "name": "BaseBdev1", 00:19:00.802 "uuid": "6468c53b-714e-4928-8343-15dceb578680", 00:19:00.802 "is_configured": true, 00:19:00.802 "data_offset": 256, 00:19:00.802 "data_size": 7936 00:19:00.802 }, 00:19:00.802 { 00:19:00.802 "name": "BaseBdev2", 00:19:00.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.802 "is_configured": false, 00:19:00.802 "data_offset": 0, 00:19:00.802 "data_size": 0 00:19:00.802 } 00:19:00.802 ] 00:19:00.802 }' 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:00.802 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.061 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:01.061 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.061 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.061 [2024-11-20 09:31:26.505662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.061 [2024-11-20 09:31:26.505872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:01.061 [2024-11-20 09:31:26.505885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:01.061 [2024-11-20 09:31:26.505970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:01.061 [2024-11-20 09:31:26.506039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:01.061 [2024-11-20 09:31:26.506054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:01.062 [2024-11-20 09:31:26.506113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.062 BaseBdev2 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.062 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.321 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.321 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:01.321 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.321 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.321 [ 00:19:01.321 { 00:19:01.321 "name": "BaseBdev2", 00:19:01.321 "aliases": [ 00:19:01.321 "e9ae5455-894f-4d38-9427-731a7f2e2570" 00:19:01.321 ], 00:19:01.321 "product_name": "Malloc disk", 00:19:01.321 "block_size": 4128, 00:19:01.321 "num_blocks": 8192, 00:19:01.321 "uuid": "e9ae5455-894f-4d38-9427-731a7f2e2570", 00:19:01.321 "md_size": 32, 00:19:01.321 "md_interleave": true, 00:19:01.321 "dif_type": 0, 00:19:01.321 "assigned_rate_limits": { 00:19:01.321 "rw_ios_per_sec": 0, 00:19:01.321 "rw_mbytes_per_sec": 0, 00:19:01.321 "r_mbytes_per_sec": 0, 00:19:01.321 "w_mbytes_per_sec": 0 00:19:01.321 }, 00:19:01.321 "claimed": true, 00:19:01.321 "claim_type": "exclusive_write", 
00:19:01.321 "zoned": false, 00:19:01.321 "supported_io_types": { 00:19:01.321 "read": true, 00:19:01.321 "write": true, 00:19:01.321 "unmap": true, 00:19:01.321 "flush": true, 00:19:01.321 "reset": true, 00:19:01.321 "nvme_admin": false, 00:19:01.321 "nvme_io": false, 00:19:01.321 "nvme_io_md": false, 00:19:01.321 "write_zeroes": true, 00:19:01.321 "zcopy": true, 00:19:01.321 "get_zone_info": false, 00:19:01.321 "zone_management": false, 00:19:01.321 "zone_append": false, 00:19:01.321 "compare": false, 00:19:01.321 "compare_and_write": false, 00:19:01.321 "abort": true, 00:19:01.321 "seek_hole": false, 00:19:01.321 "seek_data": false, 00:19:01.321 "copy": true, 00:19:01.321 "nvme_iov_md": false 00:19:01.321 }, 00:19:01.321 "memory_domains": [ 00:19:01.321 { 00:19:01.321 "dma_device_id": "system", 00:19:01.321 "dma_device_type": 1 00:19:01.321 }, 00:19:01.321 { 00:19:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.321 "dma_device_type": 2 00:19:01.321 } 00:19:01.322 ], 00:19:01.322 "driver_specific": {} 00:19:01.322 } 00:19:01.322 ] 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.322 
09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.322 "name": "Existed_Raid", 00:19:01.322 "uuid": "ee6bcdab-65eb-477f-b12d-175817939080", 00:19:01.322 "strip_size_kb": 0, 00:19:01.322 "state": "online", 00:19:01.322 "raid_level": "raid1", 00:19:01.322 "superblock": true, 00:19:01.322 "num_base_bdevs": 2, 00:19:01.322 "num_base_bdevs_discovered": 2, 00:19:01.322 
"num_base_bdevs_operational": 2, 00:19:01.322 "base_bdevs_list": [ 00:19:01.322 { 00:19:01.322 "name": "BaseBdev1", 00:19:01.322 "uuid": "6468c53b-714e-4928-8343-15dceb578680", 00:19:01.322 "is_configured": true, 00:19:01.322 "data_offset": 256, 00:19:01.322 "data_size": 7936 00:19:01.322 }, 00:19:01.322 { 00:19:01.322 "name": "BaseBdev2", 00:19:01.322 "uuid": "e9ae5455-894f-4d38-9427-731a7f2e2570", 00:19:01.322 "is_configured": true, 00:19:01.322 "data_offset": 256, 00:19:01.322 "data_size": 7936 00:19:01.322 } 00:19:01.322 ] 00:19:01.322 }' 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.322 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:01.581 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:01.582 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.582 09:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:01.582 09:31:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.582 [2024-11-20 09:31:26.997193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.582 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.841 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.841 "name": "Existed_Raid", 00:19:01.841 "aliases": [ 00:19:01.841 "ee6bcdab-65eb-477f-b12d-175817939080" 00:19:01.841 ], 00:19:01.841 "product_name": "Raid Volume", 00:19:01.841 "block_size": 4128, 00:19:01.841 "num_blocks": 7936, 00:19:01.841 "uuid": "ee6bcdab-65eb-477f-b12d-175817939080", 00:19:01.841 "md_size": 32, 00:19:01.841 "md_interleave": true, 00:19:01.841 "dif_type": 0, 00:19:01.841 "assigned_rate_limits": { 00:19:01.841 "rw_ios_per_sec": 0, 00:19:01.841 "rw_mbytes_per_sec": 0, 00:19:01.841 "r_mbytes_per_sec": 0, 00:19:01.841 "w_mbytes_per_sec": 0 00:19:01.841 }, 00:19:01.841 "claimed": false, 00:19:01.841 "zoned": false, 00:19:01.841 "supported_io_types": { 00:19:01.841 "read": true, 00:19:01.841 "write": true, 00:19:01.841 "unmap": false, 00:19:01.841 "flush": false, 00:19:01.841 "reset": true, 00:19:01.841 "nvme_admin": false, 00:19:01.841 "nvme_io": false, 00:19:01.841 "nvme_io_md": false, 00:19:01.841 "write_zeroes": true, 00:19:01.841 "zcopy": false, 00:19:01.841 "get_zone_info": false, 00:19:01.841 "zone_management": false, 00:19:01.841 "zone_append": false, 00:19:01.841 "compare": false, 00:19:01.841 "compare_and_write": false, 00:19:01.841 "abort": false, 00:19:01.841 "seek_hole": false, 00:19:01.841 "seek_data": false, 00:19:01.841 "copy": false, 00:19:01.841 "nvme_iov_md": false 00:19:01.841 }, 00:19:01.841 "memory_domains": [ 00:19:01.841 { 00:19:01.841 "dma_device_id": "system", 00:19:01.841 "dma_device_type": 1 00:19:01.841 }, 00:19:01.841 { 00:19:01.841 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:01.841 "dma_device_type": 2 00:19:01.841 }, 00:19:01.841 { 00:19:01.841 "dma_device_id": "system", 00:19:01.841 "dma_device_type": 1 00:19:01.841 }, 00:19:01.841 { 00:19:01.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.841 "dma_device_type": 2 00:19:01.841 } 00:19:01.841 ], 00:19:01.841 "driver_specific": { 00:19:01.841 "raid": { 00:19:01.841 "uuid": "ee6bcdab-65eb-477f-b12d-175817939080", 00:19:01.841 "strip_size_kb": 0, 00:19:01.841 "state": "online", 00:19:01.841 "raid_level": "raid1", 00:19:01.841 "superblock": true, 00:19:01.841 "num_base_bdevs": 2, 00:19:01.841 "num_base_bdevs_discovered": 2, 00:19:01.841 "num_base_bdevs_operational": 2, 00:19:01.841 "base_bdevs_list": [ 00:19:01.841 { 00:19:01.841 "name": "BaseBdev1", 00:19:01.841 "uuid": "6468c53b-714e-4928-8343-15dceb578680", 00:19:01.841 "is_configured": true, 00:19:01.841 "data_offset": 256, 00:19:01.841 "data_size": 7936 00:19:01.841 }, 00:19:01.841 { 00:19:01.841 "name": "BaseBdev2", 00:19:01.841 "uuid": "e9ae5455-894f-4d38-9427-731a7f2e2570", 00:19:01.841 "is_configured": true, 00:19:01.841 "data_offset": 256, 00:19:01.841 "data_size": 7936 00:19:01.841 } 00:19:01.841 ] 00:19:01.841 } 00:19:01.841 } 00:19:01.841 }' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:01.842 BaseBdev2' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:01.842 
09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.842 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.842 [2024-11-20 09:31:27.236574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.100 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.100 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.101 09:31:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.101 "name": "Existed_Raid", 00:19:02.101 "uuid": "ee6bcdab-65eb-477f-b12d-175817939080", 00:19:02.101 "strip_size_kb": 0, 00:19:02.101 "state": "online", 00:19:02.101 "raid_level": "raid1", 00:19:02.101 "superblock": true, 00:19:02.101 "num_base_bdevs": 2, 00:19:02.101 "num_base_bdevs_discovered": 1, 00:19:02.101 "num_base_bdevs_operational": 1, 00:19:02.101 "base_bdevs_list": [ 00:19:02.101 { 00:19:02.101 "name": null, 00:19:02.101 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:02.101 "is_configured": false, 00:19:02.101 "data_offset": 0, 00:19:02.101 "data_size": 7936 00:19:02.101 }, 00:19:02.101 { 00:19:02.101 "name": "BaseBdev2", 00:19:02.101 "uuid": "e9ae5455-894f-4d38-9427-731a7f2e2570", 00:19:02.101 "is_configured": true, 00:19:02.101 "data_offset": 256, 00:19:02.101 "data_size": 7936 00:19:02.101 } 00:19:02.101 ] 00:19:02.101 }' 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.101 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.668 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:02.669 09:31:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.669 [2024-11-20 09:31:27.877586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:02.669 [2024-11-20 09:31:27.877694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.669 [2024-11-20 09:31:27.977940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.669 [2024-11-20 09:31:27.977991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.669 [2024-11-20 09:31:27.978002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.669 09:31:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88924 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88924 ']' 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88924 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88924 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.669 killing process with pid 88924 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88924' 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88924 00:19:02.669 [2024-11-20 09:31:28.074124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.669 09:31:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88924 00:19:02.669 [2024-11-20 09:31:28.090848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.055 
09:31:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:04.055 00:19:04.055 real 0m5.130s 00:19:04.055 user 0m7.424s 00:19:04.055 sys 0m0.887s 00:19:04.055 09:31:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.055 09:31:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.055 ************************************ 00:19:04.055 END TEST raid_state_function_test_sb_md_interleaved 00:19:04.055 ************************************ 00:19:04.055 09:31:29 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:04.055 09:31:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:04.055 09:31:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.055 09:31:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.055 ************************************ 00:19:04.055 START TEST raid_superblock_test_md_interleaved 00:19:04.055 ************************************ 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89171 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89171 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89171 ']' 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.055 09:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.055 [2024-11-20 09:31:29.367203] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:19:04.055 [2024-11-20 09:31:29.367331] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89171 ] 00:19:04.324 [2024-11-20 09:31:29.545803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.324 [2024-11-20 09:31:29.664160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.584 [2024-11-20 09:31:29.877947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.584 [2024-11-20 09:31:29.878010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.842 malloc1 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.842 [2024-11-20 09:31:30.258799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.842 [2024-11-20 09:31:30.258855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.842 [2024-11-20 09:31:30.258874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:04.842 [2024-11-20 09:31:30.258883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.842 
[2024-11-20 09:31:30.260661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.842 [2024-11-20 09:31:30.260699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.842 pt1 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:04.842 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:04.843 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.843 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.843 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.843 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:04.843 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.843 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.102 malloc2 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.102 [2024-11-20 09:31:30.314553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.102 [2024-11-20 09:31:30.314615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.102 [2024-11-20 09:31:30.314635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:05.102 [2024-11-20 09:31:30.314644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.102 [2024-11-20 09:31:30.316478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.102 [2024-11-20 09:31:30.316513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.102 pt2 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.102 [2024-11-20 09:31:30.326593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.102 [2024-11-20 09:31:30.328338] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.102 [2024-11-20 09:31:30.328537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:05.102 [2024-11-20 09:31:30.328558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.102 [2024-11-20 09:31:30.328628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:05.102 [2024-11-20 09:31:30.328704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:05.102 [2024-11-20 09:31:30.328729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:05.102 [2024-11-20 09:31:30.328797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.102 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.103 
09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.103 "name": "raid_bdev1", 00:19:05.103 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:05.103 "strip_size_kb": 0, 00:19:05.103 "state": "online", 00:19:05.103 "raid_level": "raid1", 00:19:05.103 "superblock": true, 00:19:05.103 "num_base_bdevs": 2, 00:19:05.103 "num_base_bdevs_discovered": 2, 00:19:05.103 "num_base_bdevs_operational": 2, 00:19:05.103 "base_bdevs_list": [ 00:19:05.103 { 00:19:05.103 "name": "pt1", 00:19:05.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.103 "is_configured": true, 00:19:05.103 "data_offset": 256, 00:19:05.103 "data_size": 7936 00:19:05.103 }, 00:19:05.103 { 00:19:05.103 "name": "pt2", 00:19:05.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.103 "is_configured": true, 00:19:05.103 "data_offset": 256, 00:19:05.103 "data_size": 7936 00:19:05.103 } 00:19:05.103 ] 00:19:05.103 }' 00:19:05.103 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.103 09:31:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.362 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.363 [2024-11-20 09:31:30.770122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.363 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.363 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.363 "name": "raid_bdev1", 00:19:05.363 "aliases": [ 00:19:05.363 "49af0dab-3946-431d-a3a1-f286c8446c8a" 00:19:05.363 ], 00:19:05.363 "product_name": "Raid Volume", 00:19:05.363 "block_size": 4128, 00:19:05.363 "num_blocks": 7936, 00:19:05.363 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:05.363 "md_size": 32, 
00:19:05.363 "md_interleave": true, 00:19:05.363 "dif_type": 0, 00:19:05.363 "assigned_rate_limits": { 00:19:05.363 "rw_ios_per_sec": 0, 00:19:05.363 "rw_mbytes_per_sec": 0, 00:19:05.363 "r_mbytes_per_sec": 0, 00:19:05.363 "w_mbytes_per_sec": 0 00:19:05.363 }, 00:19:05.363 "claimed": false, 00:19:05.363 "zoned": false, 00:19:05.363 "supported_io_types": { 00:19:05.363 "read": true, 00:19:05.363 "write": true, 00:19:05.363 "unmap": false, 00:19:05.363 "flush": false, 00:19:05.363 "reset": true, 00:19:05.363 "nvme_admin": false, 00:19:05.363 "nvme_io": false, 00:19:05.363 "nvme_io_md": false, 00:19:05.363 "write_zeroes": true, 00:19:05.363 "zcopy": false, 00:19:05.363 "get_zone_info": false, 00:19:05.363 "zone_management": false, 00:19:05.363 "zone_append": false, 00:19:05.363 "compare": false, 00:19:05.363 "compare_and_write": false, 00:19:05.363 "abort": false, 00:19:05.363 "seek_hole": false, 00:19:05.363 "seek_data": false, 00:19:05.363 "copy": false, 00:19:05.363 "nvme_iov_md": false 00:19:05.363 }, 00:19:05.363 "memory_domains": [ 00:19:05.363 { 00:19:05.363 "dma_device_id": "system", 00:19:05.363 "dma_device_type": 1 00:19:05.363 }, 00:19:05.363 { 00:19:05.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.363 "dma_device_type": 2 00:19:05.363 }, 00:19:05.363 { 00:19:05.363 "dma_device_id": "system", 00:19:05.363 "dma_device_type": 1 00:19:05.363 }, 00:19:05.363 { 00:19:05.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.363 "dma_device_type": 2 00:19:05.363 } 00:19:05.363 ], 00:19:05.363 "driver_specific": { 00:19:05.363 "raid": { 00:19:05.363 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:05.363 "strip_size_kb": 0, 00:19:05.363 "state": "online", 00:19:05.363 "raid_level": "raid1", 00:19:05.363 "superblock": true, 00:19:05.363 "num_base_bdevs": 2, 00:19:05.363 "num_base_bdevs_discovered": 2, 00:19:05.363 "num_base_bdevs_operational": 2, 00:19:05.363 "base_bdevs_list": [ 00:19:05.363 { 00:19:05.363 "name": "pt1", 00:19:05.363 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:05.363 "is_configured": true, 00:19:05.363 "data_offset": 256, 00:19:05.363 "data_size": 7936 00:19:05.363 }, 00:19:05.363 { 00:19:05.363 "name": "pt2", 00:19:05.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.363 "is_configured": true, 00:19:05.363 "data_offset": 256, 00:19:05.363 "data_size": 7936 00:19:05.363 } 00:19:05.363 ] 00:19:05.363 } 00:19:05.363 } 00:19:05.363 }' 00:19:05.363 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:05.623 pt2' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:05.623 09:31:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.623 09:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.623 [2024-11-20 09:31:31.009712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49af0dab-3946-431d-a3a1-f286c8446c8a 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 49af0dab-3946-431d-a3a1-f286c8446c8a ']' 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.623 [2024-11-20 09:31:31.053312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.623 [2024-11-20 09:31:31.053345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.623 [2024-11-20 09:31:31.053456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.623 [2024-11-20 09:31:31.053515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.623 [2024-11-20 09:31:31.053528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:05.623 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.883 09:31:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.883 09:31:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.883 [2024-11-20 09:31:31.193083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:05.883 [2024-11-20 09:31:31.194960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:05.883 [2024-11-20 09:31:31.195048] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:05.883 [2024-11-20 09:31:31.195098] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:05.883 [2024-11-20 09:31:31.195112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.883 [2024-11-20 09:31:31.195130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:05.883 request: 00:19:05.883 { 00:19:05.883 "name": "raid_bdev1", 00:19:05.883 "raid_level": "raid1", 00:19:05.883 "base_bdevs": [ 00:19:05.883 "malloc1", 00:19:05.883 "malloc2" 00:19:05.883 ], 00:19:05.883 "superblock": false, 00:19:05.883 "method": "bdev_raid_create", 00:19:05.883 "req_id": 1 00:19:05.883 } 00:19:05.883 Got JSON-RPC error response 00:19:05.883 response: 00:19:05.883 { 00:19:05.883 "code": -17, 00:19:05.883 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:05.883 } 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.883 09:31:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.883 [2024-11-20 09:31:31.248965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.883 [2024-11-20 09:31:31.249083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.883 [2024-11-20 09:31:31.249122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:05.883 [2024-11-20 09:31:31.249158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.883 [2024-11-20 09:31:31.251335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.883 [2024-11-20 09:31:31.251413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.883 [2024-11-20 09:31:31.251495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:05.883 [2024-11-20 09:31:31.251600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.883 pt1 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.883 09:31:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:05.883 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.884 
"name": "raid_bdev1", 00:19:05.884 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:05.884 "strip_size_kb": 0, 00:19:05.884 "state": "configuring", 00:19:05.884 "raid_level": "raid1", 00:19:05.884 "superblock": true, 00:19:05.884 "num_base_bdevs": 2, 00:19:05.884 "num_base_bdevs_discovered": 1, 00:19:05.884 "num_base_bdevs_operational": 2, 00:19:05.884 "base_bdevs_list": [ 00:19:05.884 { 00:19:05.884 "name": "pt1", 00:19:05.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.884 "is_configured": true, 00:19:05.884 "data_offset": 256, 00:19:05.884 "data_size": 7936 00:19:05.884 }, 00:19:05.884 { 00:19:05.884 "name": null, 00:19:05.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.884 "is_configured": false, 00:19:05.884 "data_offset": 256, 00:19:05.884 "data_size": 7936 00:19:05.884 } 00:19:05.884 ] 00:19:05.884 }' 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.884 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 [2024-11-20 09:31:31.744191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.452 [2024-11-20 09:31:31.744332] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.452 [2024-11-20 09:31:31.744376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:06.452 [2024-11-20 09:31:31.744413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.452 [2024-11-20 09:31:31.744627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.452 [2024-11-20 09:31:31.744678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.452 [2024-11-20 09:31:31.744760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:06.452 [2024-11-20 09:31:31.744815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.452 [2024-11-20 09:31:31.744939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:06.452 [2024-11-20 09:31:31.744981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:06.452 [2024-11-20 09:31:31.745079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:06.452 [2024-11-20 09:31:31.745198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:06.452 [2024-11-20 09:31:31.745237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:06.452 [2024-11-20 09:31:31.745359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.452 pt2 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:06.452 09:31:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.452 "name": 
"raid_bdev1", 00:19:06.452 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:06.452 "strip_size_kb": 0, 00:19:06.452 "state": "online", 00:19:06.452 "raid_level": "raid1", 00:19:06.452 "superblock": true, 00:19:06.452 "num_base_bdevs": 2, 00:19:06.452 "num_base_bdevs_discovered": 2, 00:19:06.452 "num_base_bdevs_operational": 2, 00:19:06.452 "base_bdevs_list": [ 00:19:06.452 { 00:19:06.452 "name": "pt1", 00:19:06.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.452 "is_configured": true, 00:19:06.452 "data_offset": 256, 00:19:06.452 "data_size": 7936 00:19:06.452 }, 00:19:06.452 { 00:19:06.452 "name": "pt2", 00:19:06.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.452 "is_configured": true, 00:19:06.452 "data_offset": 256, 00:19:06.452 "data_size": 7936 00:19:06.452 } 00:19:06.452 ] 00:19:06.452 }' 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.452 09:31:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.020 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:07.020 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:07.020 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:07.020 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:07.020 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:07.020 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.021 09:31:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:07.021 [2024-11-20 09:31:32.219707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.021 "name": "raid_bdev1", 00:19:07.021 "aliases": [ 00:19:07.021 "49af0dab-3946-431d-a3a1-f286c8446c8a" 00:19:07.021 ], 00:19:07.021 "product_name": "Raid Volume", 00:19:07.021 "block_size": 4128, 00:19:07.021 "num_blocks": 7936, 00:19:07.021 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:07.021 "md_size": 32, 00:19:07.021 "md_interleave": true, 00:19:07.021 "dif_type": 0, 00:19:07.021 "assigned_rate_limits": { 00:19:07.021 "rw_ios_per_sec": 0, 00:19:07.021 "rw_mbytes_per_sec": 0, 00:19:07.021 "r_mbytes_per_sec": 0, 00:19:07.021 "w_mbytes_per_sec": 0 00:19:07.021 }, 00:19:07.021 "claimed": false, 00:19:07.021 "zoned": false, 00:19:07.021 "supported_io_types": { 00:19:07.021 "read": true, 00:19:07.021 "write": true, 00:19:07.021 "unmap": false, 00:19:07.021 "flush": false, 00:19:07.021 "reset": true, 00:19:07.021 "nvme_admin": false, 00:19:07.021 "nvme_io": false, 00:19:07.021 "nvme_io_md": false, 00:19:07.021 "write_zeroes": true, 00:19:07.021 "zcopy": false, 00:19:07.021 "get_zone_info": false, 00:19:07.021 "zone_management": false, 00:19:07.021 "zone_append": false, 00:19:07.021 "compare": false, 00:19:07.021 "compare_and_write": false, 00:19:07.021 "abort": false, 00:19:07.021 "seek_hole": false, 00:19:07.021 "seek_data": false, 00:19:07.021 "copy": false, 00:19:07.021 "nvme_iov_md": 
false 00:19:07.021 }, 00:19:07.021 "memory_domains": [ 00:19:07.021 { 00:19:07.021 "dma_device_id": "system", 00:19:07.021 "dma_device_type": 1 00:19:07.021 }, 00:19:07.021 { 00:19:07.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.021 "dma_device_type": 2 00:19:07.021 }, 00:19:07.021 { 00:19:07.021 "dma_device_id": "system", 00:19:07.021 "dma_device_type": 1 00:19:07.021 }, 00:19:07.021 { 00:19:07.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.021 "dma_device_type": 2 00:19:07.021 } 00:19:07.021 ], 00:19:07.021 "driver_specific": { 00:19:07.021 "raid": { 00:19:07.021 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:07.021 "strip_size_kb": 0, 00:19:07.021 "state": "online", 00:19:07.021 "raid_level": "raid1", 00:19:07.021 "superblock": true, 00:19:07.021 "num_base_bdevs": 2, 00:19:07.021 "num_base_bdevs_discovered": 2, 00:19:07.021 "num_base_bdevs_operational": 2, 00:19:07.021 "base_bdevs_list": [ 00:19:07.021 { 00:19:07.021 "name": "pt1", 00:19:07.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:07.021 "is_configured": true, 00:19:07.021 "data_offset": 256, 00:19:07.021 "data_size": 7936 00:19:07.021 }, 00:19:07.021 { 00:19:07.021 "name": "pt2", 00:19:07.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.021 "is_configured": true, 00:19:07.021 "data_offset": 256, 00:19:07.021 "data_size": 7936 00:19:07.021 } 00:19:07.021 ] 00:19:07.021 } 00:19:07.021 } 00:19:07.021 }' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:07.021 pt2' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.021 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:07.021 [2024-11-20 09:31:32.471341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 49af0dab-3946-431d-a3a1-f286c8446c8a '!=' 49af0dab-3946-431d-a3a1-f286c8446c8a ']' 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.280 [2024-11-20 09:31:32.522988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.280 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:07.281 "name": "raid_bdev1", 00:19:07.281 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:07.281 "strip_size_kb": 0, 00:19:07.281 "state": "online", 00:19:07.281 "raid_level": "raid1", 00:19:07.281 "superblock": true, 00:19:07.281 "num_base_bdevs": 2, 00:19:07.281 "num_base_bdevs_discovered": 1, 00:19:07.281 "num_base_bdevs_operational": 1, 00:19:07.281 "base_bdevs_list": [ 00:19:07.281 { 00:19:07.281 "name": null, 00:19:07.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.281 "is_configured": false, 00:19:07.281 "data_offset": 0, 00:19:07.281 "data_size": 7936 00:19:07.281 }, 00:19:07.281 { 00:19:07.281 "name": "pt2", 00:19:07.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.281 "is_configured": true, 00:19:07.281 "data_offset": 256, 00:19:07.281 "data_size": 7936 00:19:07.281 } 00:19:07.281 ] 00:19:07.281 }' 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.281 09:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 [2024-11-20 09:31:33.026084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.849 [2024-11-20 09:31:33.026179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.849 [2024-11-20 09:31:33.026275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.849 [2024-11-20 09:31:33.026343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:07.849 [2024-11-20 09:31:33.026388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.849 [2024-11-20 09:31:33.109938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.849 [2024-11-20 09:31:33.110006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.849 [2024-11-20 09:31:33.110023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:07.849 [2024-11-20 09:31:33.110034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.849 [2024-11-20 09:31:33.112115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.849 [2024-11-20 09:31:33.112217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.849 [2024-11-20 09:31:33.112280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:07.849 [2024-11-20 09:31:33.112332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.849 [2024-11-20 09:31:33.112404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:07.849 [2024-11-20 09:31:33.112418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:07.849 [2024-11-20 09:31:33.112530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:07.849 [2024-11-20 09:31:33.112603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:07.849 [2024-11-20 09:31:33.112611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:07.849 [2024-11-20 09:31:33.112681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.849 pt2 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.849 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.850 09:31:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.850 "name": "raid_bdev1", 00:19:07.850 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:07.850 "strip_size_kb": 0, 00:19:07.850 "state": "online", 00:19:07.850 "raid_level": "raid1", 00:19:07.850 "superblock": true, 00:19:07.850 "num_base_bdevs": 2, 00:19:07.850 "num_base_bdevs_discovered": 1, 00:19:07.850 "num_base_bdevs_operational": 1, 00:19:07.850 "base_bdevs_list": [ 00:19:07.850 { 00:19:07.850 "name": null, 00:19:07.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.850 "is_configured": false, 00:19:07.850 "data_offset": 256, 00:19:07.850 "data_size": 7936 00:19:07.850 }, 00:19:07.850 { 00:19:07.850 "name": "pt2", 00:19:07.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.850 "is_configured": true, 00:19:07.850 "data_offset": 256, 00:19:07.850 "data_size": 7936 00:19:07.850 } 00:19:07.850 ] 00:19:07.850 }' 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.850 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:08.418 09:31:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 [2024-11-20 09:31:33.613083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.418 [2024-11-20 09:31:33.613206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.418 [2024-11-20 09:31:33.613306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.418 [2024-11-20 09:31:33.613378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.418 [2024-11-20 09:31:33.613469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 [2024-11-20 09:31:33.677019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:08.418 [2024-11-20 09:31:33.677090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.418 [2024-11-20 09:31:33.677114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:08.418 [2024-11-20 09:31:33.677134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.418 [2024-11-20 09:31:33.679271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.418 [2024-11-20 09:31:33.679308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:08.418 [2024-11-20 09:31:33.679367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:08.418 [2024-11-20 09:31:33.679415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:08.418 [2024-11-20 09:31:33.679531] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:08.418 [2024-11-20 09:31:33.679544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.418 [2024-11-20 09:31:33.679571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:08.418 [2024-11-20 09:31:33.679641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.418 [2024-11-20 09:31:33.679717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:08.418 [2024-11-20 09:31:33.679726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:08.418 [2024-11-20 09:31:33.679797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.418 [2024-11-20 09:31:33.679864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:08.418 [2024-11-20 09:31:33.679879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:08.418 [2024-11-20 09:31:33.679952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.418 pt1 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.418 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.419 09:31:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.419 "name": "raid_bdev1", 00:19:08.419 "uuid": "49af0dab-3946-431d-a3a1-f286c8446c8a", 00:19:08.419 "strip_size_kb": 0, 00:19:08.419 "state": "online", 00:19:08.419 "raid_level": "raid1", 00:19:08.419 "superblock": true, 00:19:08.419 "num_base_bdevs": 2, 00:19:08.419 "num_base_bdevs_discovered": 1, 00:19:08.419 "num_base_bdevs_operational": 1, 00:19:08.419 "base_bdevs_list": [ 00:19:08.419 { 00:19:08.419 "name": null, 00:19:08.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.419 "is_configured": false, 00:19:08.419 "data_offset": 256, 00:19:08.419 "data_size": 7936 00:19:08.419 }, 00:19:08.419 { 00:19:08.419 "name": "pt2", 00:19:08.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.419 "is_configured": true, 00:19:08.419 "data_offset": 256, 00:19:08.419 "data_size": 7936 00:19:08.419 } 00:19:08.419 ] 00:19:08.419 }' 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.419 09:31:33 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:08.988 [2024-11-20 09:31:34.176419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 49af0dab-3946-431d-a3a1-f286c8446c8a '!=' 49af0dab-3946-431d-a3a1-f286c8446c8a ']' 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89171 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89171 ']' 00:19:08.988 09:31:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89171 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89171 00:19:08.988 killing process with pid 89171 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89171' 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89171 00:19:08.988 [2024-11-20 09:31:34.246079] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.988 [2024-11-20 09:31:34.246186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.988 [2024-11-20 09:31:34.246241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.988 09:31:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89171 00:19:08.988 [2024-11-20 09:31:34.246257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:09.247 [2024-11-20 09:31:34.465910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.183 09:31:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:10.183 00:19:10.183 real 0m6.334s 00:19:10.183 user 0m9.587s 00:19:10.183 sys 0m1.157s 00:19:10.183 
09:31:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.183 09:31:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.183 ************************************ 00:19:10.183 END TEST raid_superblock_test_md_interleaved 00:19:10.183 ************************************ 00:19:10.441 09:31:35 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:10.441 09:31:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:10.441 09:31:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.441 09:31:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.442 ************************************ 00:19:10.442 START TEST raid_rebuild_test_sb_md_interleaved 00:19:10.442 ************************************ 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89500 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89500 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89500 ']' 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.442 09:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:10.442 Zero copy mechanism will not be used. 00:19:10.442 [2024-11-20 09:31:35.771988] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:19:10.442 [2024-11-20 09:31:35.772098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89500 ] 00:19:10.701 [2024-11-20 09:31:35.947890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.701 [2024-11-20 09:31:36.063012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.960 [2024-11-20 09:31:36.275933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.960 [2024-11-20 09:31:36.275995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.220 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.220 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:11.220 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.220 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:11.220 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.220 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 BaseBdev1_malloc 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 [2024-11-20 09:31:36.688299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:11.480 [2024-11-20 09:31:36.688395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.480 [2024-11-20 09:31:36.688419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:11.480 [2024-11-20 09:31:36.688443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.480 [2024-11-20 09:31:36.690498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.480 [2024-11-20 09:31:36.690541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.480 BaseBdev1 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 BaseBdev2_malloc 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.480 [2024-11-20 09:31:36.749528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:11.480 [2024-11-20 09:31:36.749602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.480 [2024-11-20 09:31:36.749625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.480 [2024-11-20 09:31:36.749639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.480 [2024-11-20 09:31:36.751721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.480 [2024-11-20 09:31:36.751760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.480 BaseBdev2 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 spare_malloc 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 spare_delay 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 [2024-11-20 09:31:36.841976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.480 [2024-11-20 09:31:36.842042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.480 [2024-11-20 09:31:36.842064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.480 [2024-11-20 09:31:36.842075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.480 [2024-11-20 09:31:36.844030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.480 [2024-11-20 09:31:36.844072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.480 spare 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 [2024-11-20 09:31:36.854021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.480 [2024-11-20 09:31:36.855939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.480 [2024-11-20 
09:31:36.856161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:11.480 [2024-11-20 09:31:36.856186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:11.480 [2024-11-20 09:31:36.856281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.480 [2024-11-20 09:31:36.856362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:11.480 [2024-11-20 09:31:36.856374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:11.480 [2024-11-20 09:31:36.856456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.480 "name": "raid_bdev1", 00:19:11.480 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:11.480 "strip_size_kb": 0, 00:19:11.480 "state": "online", 00:19:11.480 "raid_level": "raid1", 00:19:11.480 "superblock": true, 00:19:11.480 "num_base_bdevs": 2, 00:19:11.480 "num_base_bdevs_discovered": 2, 00:19:11.480 "num_base_bdevs_operational": 2, 00:19:11.480 "base_bdevs_list": [ 00:19:11.480 { 00:19:11.480 "name": "BaseBdev1", 00:19:11.480 "uuid": "97abe784-47bc-5b02-81fb-f2eb5e32300e", 00:19:11.480 "is_configured": true, 00:19:11.480 "data_offset": 256, 00:19:11.480 "data_size": 7936 00:19:11.480 }, 00:19:11.480 { 00:19:11.480 "name": "BaseBdev2", 00:19:11.480 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:11.480 "is_configured": true, 00:19:11.480 "data_offset": 256, 00:19:11.480 "data_size": 7936 00:19:11.480 } 00:19:11.480 ] 00:19:11.480 }' 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.480 09:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.050 09:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.050 [2024-11-20 09:31:37.333618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:12.050 09:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.050 [2024-11-20 09:31:37.425091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.050 09:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.050 "name": "raid_bdev1", 00:19:12.050 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:12.050 "strip_size_kb": 0, 00:19:12.050 "state": "online", 00:19:12.050 "raid_level": "raid1", 00:19:12.050 "superblock": true, 00:19:12.050 "num_base_bdevs": 2, 00:19:12.050 "num_base_bdevs_discovered": 1, 00:19:12.050 "num_base_bdevs_operational": 1, 00:19:12.050 "base_bdevs_list": [ 00:19:12.050 { 00:19:12.050 "name": null, 00:19:12.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.050 "is_configured": false, 00:19:12.050 "data_offset": 0, 00:19:12.050 "data_size": 7936 00:19:12.050 }, 00:19:12.050 { 00:19:12.050 "name": "BaseBdev2", 00:19:12.050 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:12.050 "is_configured": true, 00:19:12.050 "data_offset": 256, 00:19:12.050 "data_size": 7936 00:19:12.050 } 00:19:12.050 ] 00:19:12.050 }' 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.050 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.620 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.620 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.620 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.620 [2024-11-20 09:31:37.844393] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.620 [2024-11-20 09:31:37.862787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:12.620 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.620 09:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:12.620 [2024-11-20 09:31:37.864555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.563 "name": "raid_bdev1", 00:19:13.563 
"uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:13.563 "strip_size_kb": 0, 00:19:13.563 "state": "online", 00:19:13.563 "raid_level": "raid1", 00:19:13.563 "superblock": true, 00:19:13.563 "num_base_bdevs": 2, 00:19:13.563 "num_base_bdevs_discovered": 2, 00:19:13.563 "num_base_bdevs_operational": 2, 00:19:13.563 "process": { 00:19:13.563 "type": "rebuild", 00:19:13.563 "target": "spare", 00:19:13.563 "progress": { 00:19:13.563 "blocks": 2560, 00:19:13.563 "percent": 32 00:19:13.563 } 00:19:13.563 }, 00:19:13.563 "base_bdevs_list": [ 00:19:13.563 { 00:19:13.563 "name": "spare", 00:19:13.563 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:13.563 "is_configured": true, 00:19:13.563 "data_offset": 256, 00:19:13.563 "data_size": 7936 00:19:13.563 }, 00:19:13.563 { 00:19:13.563 "name": "BaseBdev2", 00:19:13.563 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:13.563 "is_configured": true, 00:19:13.563 "data_offset": 256, 00:19:13.563 "data_size": 7936 00:19:13.563 } 00:19:13.563 ] 00:19:13.563 }' 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.563 09:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.832 [2024-11-20 09:31:39.023834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:13.832 [2024-11-20 09:31:39.070182] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.832 [2024-11-20 09:31:39.070257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.832 [2024-11-20 09:31:39.070272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.832 [2024-11-20 09:31:39.070281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.832 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.832 "name": "raid_bdev1", 00:19:13.832 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:13.832 "strip_size_kb": 0, 00:19:13.832 "state": "online", 00:19:13.832 "raid_level": "raid1", 00:19:13.832 "superblock": true, 00:19:13.832 "num_base_bdevs": 2, 00:19:13.832 "num_base_bdevs_discovered": 1, 00:19:13.832 "num_base_bdevs_operational": 1, 00:19:13.832 "base_bdevs_list": [ 00:19:13.832 { 00:19:13.832 "name": null, 00:19:13.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.832 "is_configured": false, 00:19:13.832 "data_offset": 0, 00:19:13.833 "data_size": 7936 00:19:13.833 }, 00:19:13.833 { 00:19:13.833 "name": "BaseBdev2", 00:19:13.833 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:13.833 "is_configured": true, 00:19:13.833 "data_offset": 256, 00:19:13.833 "data_size": 7936 00:19:13.833 } 00:19:13.833 ] 00:19:13.833 }' 00:19:13.833 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.833 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.092 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.351 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.351 "name": "raid_bdev1", 00:19:14.351 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:14.351 "strip_size_kb": 0, 00:19:14.351 "state": "online", 00:19:14.351 "raid_level": "raid1", 00:19:14.351 "superblock": true, 00:19:14.351 "num_base_bdevs": 2, 00:19:14.351 "num_base_bdevs_discovered": 1, 00:19:14.351 "num_base_bdevs_operational": 1, 00:19:14.351 "base_bdevs_list": [ 00:19:14.351 { 00:19:14.351 "name": null, 00:19:14.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.351 "is_configured": false, 00:19:14.351 "data_offset": 0, 00:19:14.351 "data_size": 7936 00:19:14.351 }, 00:19:14.351 { 00:19:14.351 "name": "BaseBdev2", 00:19:14.351 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:14.351 "is_configured": true, 00:19:14.351 "data_offset": 256, 00:19:14.351 "data_size": 7936 00:19:14.351 } 00:19:14.351 ] 00:19:14.351 }' 
00:19:14.351 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.352 [2024-11-20 09:31:39.654627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.352 [2024-11-20 09:31:39.672086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.352 09:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:14.352 [2024-11-20 09:31:39.674021] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.289 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.289 "name": "raid_bdev1", 00:19:15.289 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:15.289 "strip_size_kb": 0, 00:19:15.289 "state": "online", 00:19:15.289 "raid_level": "raid1", 00:19:15.289 "superblock": true, 00:19:15.289 "num_base_bdevs": 2, 00:19:15.289 "num_base_bdevs_discovered": 2, 00:19:15.289 "num_base_bdevs_operational": 2, 00:19:15.289 "process": { 00:19:15.289 "type": "rebuild", 00:19:15.289 "target": "spare", 00:19:15.289 "progress": { 00:19:15.289 "blocks": 2560, 00:19:15.289 "percent": 32 00:19:15.289 } 00:19:15.289 }, 00:19:15.289 "base_bdevs_list": [ 00:19:15.289 { 00:19:15.289 "name": "spare", 00:19:15.289 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:15.289 "is_configured": true, 00:19:15.289 "data_offset": 256, 00:19:15.289 "data_size": 7936 00:19:15.289 }, 00:19:15.289 { 00:19:15.289 "name": "BaseBdev2", 00:19:15.289 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:15.289 "is_configured": true, 00:19:15.289 "data_offset": 256, 00:19:15.289 "data_size": 7936 00:19:15.289 } 00:19:15.289 ] 00:19:15.289 }' 00:19:15.289 09:31:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:15.549 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=775 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.549 09:31:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.549 "name": "raid_bdev1", 00:19:15.549 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:15.549 "strip_size_kb": 0, 00:19:15.549 "state": "online", 00:19:15.549 "raid_level": "raid1", 00:19:15.549 "superblock": true, 00:19:15.549 "num_base_bdevs": 2, 00:19:15.549 "num_base_bdevs_discovered": 2, 00:19:15.549 "num_base_bdevs_operational": 2, 00:19:15.549 "process": { 00:19:15.549 "type": "rebuild", 00:19:15.549 "target": "spare", 00:19:15.549 "progress": { 00:19:15.549 "blocks": 2816, 00:19:15.549 "percent": 35 00:19:15.549 } 00:19:15.549 }, 00:19:15.549 "base_bdevs_list": [ 00:19:15.549 { 00:19:15.549 "name": "spare", 00:19:15.549 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:15.549 "is_configured": true, 00:19:15.549 "data_offset": 256, 00:19:15.549 "data_size": 7936 00:19:15.549 }, 00:19:15.549 { 00:19:15.549 "name": "BaseBdev2", 00:19:15.549 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:15.549 "is_configured": true, 00:19:15.549 "data_offset": 256, 00:19:15.549 "data_size": 7936 00:19:15.549 } 00:19:15.549 ] 00:19:15.549 }' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.549 09:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.928 09:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.928 09:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.928 09:31:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.928 "name": "raid_bdev1", 00:19:16.928 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:16.928 "strip_size_kb": 0, 00:19:16.928 "state": "online", 00:19:16.928 "raid_level": "raid1", 00:19:16.928 "superblock": true, 00:19:16.928 "num_base_bdevs": 2, 00:19:16.928 "num_base_bdevs_discovered": 2, 00:19:16.928 "num_base_bdevs_operational": 2, 00:19:16.928 "process": { 00:19:16.928 "type": "rebuild", 00:19:16.928 "target": "spare", 00:19:16.928 "progress": { 00:19:16.928 "blocks": 5888, 00:19:16.928 "percent": 74 00:19:16.928 } 00:19:16.928 }, 00:19:16.928 "base_bdevs_list": [ 00:19:16.928 { 00:19:16.928 "name": "spare", 00:19:16.928 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:16.928 "is_configured": true, 00:19:16.928 "data_offset": 256, 00:19:16.928 "data_size": 7936 00:19:16.928 }, 00:19:16.928 { 00:19:16.928 "name": "BaseBdev2", 00:19:16.928 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:16.928 "is_configured": true, 00:19:16.928 "data_offset": 256, 00:19:16.928 "data_size": 7936 00:19:16.928 } 00:19:16.928 ] 00:19:16.928 }' 00:19:16.928 09:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.928 09:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.928 09:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.928 09:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.928 09:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.497 [2024-11-20 09:31:42.788850] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.497 [2024-11-20 09:31:42.788935] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.497 [2024-11-20 09:31:42.789050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.756 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.756 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.757 "name": "raid_bdev1", 00:19:17.757 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:17.757 "strip_size_kb": 0, 00:19:17.757 "state": "online", 00:19:17.757 "raid_level": "raid1", 00:19:17.757 "superblock": true, 00:19:17.757 "num_base_bdevs": 2, 00:19:17.757 
"num_base_bdevs_discovered": 2, 00:19:17.757 "num_base_bdevs_operational": 2, 00:19:17.757 "base_bdevs_list": [ 00:19:17.757 { 00:19:17.757 "name": "spare", 00:19:17.757 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:17.757 "is_configured": true, 00:19:17.757 "data_offset": 256, 00:19:17.757 "data_size": 7936 00:19:17.757 }, 00:19:17.757 { 00:19:17.757 "name": "BaseBdev2", 00:19:17.757 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:17.757 "is_configured": true, 00:19:17.757 "data_offset": 256, 00:19:17.757 "data_size": 7936 00:19:17.757 } 00:19:17.757 ] 00:19:17.757 }' 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.757 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.016 09:31:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.016 "name": "raid_bdev1", 00:19:18.016 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:18.016 "strip_size_kb": 0, 00:19:18.016 "state": "online", 00:19:18.016 "raid_level": "raid1", 00:19:18.016 "superblock": true, 00:19:18.016 "num_base_bdevs": 2, 00:19:18.016 "num_base_bdevs_discovered": 2, 00:19:18.016 "num_base_bdevs_operational": 2, 00:19:18.016 "base_bdevs_list": [ 00:19:18.016 { 00:19:18.016 "name": "spare", 00:19:18.016 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:18.016 "is_configured": true, 00:19:18.016 "data_offset": 256, 00:19:18.016 "data_size": 7936 00:19:18.016 }, 00:19:18.016 { 00:19:18.016 "name": "BaseBdev2", 00:19:18.016 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:18.016 "is_configured": true, 00:19:18.016 "data_offset": 256, 00:19:18.016 "data_size": 7936 00:19:18.016 } 00:19:18.016 ] 00:19:18.016 }' 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.016 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.017 09:31:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.017 "name": 
"raid_bdev1", 00:19:18.017 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:18.017 "strip_size_kb": 0, 00:19:18.017 "state": "online", 00:19:18.017 "raid_level": "raid1", 00:19:18.017 "superblock": true, 00:19:18.017 "num_base_bdevs": 2, 00:19:18.017 "num_base_bdevs_discovered": 2, 00:19:18.017 "num_base_bdevs_operational": 2, 00:19:18.017 "base_bdevs_list": [ 00:19:18.017 { 00:19:18.017 "name": "spare", 00:19:18.017 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:18.017 "is_configured": true, 00:19:18.017 "data_offset": 256, 00:19:18.017 "data_size": 7936 00:19:18.017 }, 00:19:18.017 { 00:19:18.017 "name": "BaseBdev2", 00:19:18.017 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:18.017 "is_configured": true, 00:19:18.017 "data_offset": 256, 00:19:18.017 "data_size": 7936 00:19:18.017 } 00:19:18.017 ] 00:19:18.017 }' 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.017 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.584 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.584 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.584 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.584 [2024-11-20 09:31:43.829838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.584 [2024-11-20 09:31:43.829877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.584 [2024-11-20 09:31:43.829974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.584 [2024-11-20 09:31:43.830046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.585 [2024-11-20 
09:31:43.830058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.585 09:31:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.585 [2024-11-20 09:31:43.909656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.585 [2024-11-20 09:31:43.909709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.585 [2024-11-20 09:31:43.909729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:18.585 [2024-11-20 09:31:43.909738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.585 [2024-11-20 09:31:43.911727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.585 [2024-11-20 09:31:43.911762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.585 [2024-11-20 09:31:43.911816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.585 [2024-11-20 09:31:43.911879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.585 [2024-11-20 09:31:43.911985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.585 spare 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.585 09:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.585 [2024-11-20 09:31:44.011895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:18.585 [2024-11-20 09:31:44.011940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:18.585 [2024-11-20 09:31:44.012065] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:18.585 [2024-11-20 09:31:44.012169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:18.585 [2024-11-20 09:31:44.012178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:18.585 [2024-11-20 09:31:44.012271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.585 09:31:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.585 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.844 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.844 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.844 "name": "raid_bdev1", 00:19:18.844 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:18.844 "strip_size_kb": 0, 00:19:18.844 "state": "online", 00:19:18.844 "raid_level": "raid1", 00:19:18.844 "superblock": true, 00:19:18.844 "num_base_bdevs": 2, 00:19:18.844 "num_base_bdevs_discovered": 2, 00:19:18.844 "num_base_bdevs_operational": 2, 00:19:18.844 "base_bdevs_list": [ 00:19:18.844 { 00:19:18.844 "name": "spare", 00:19:18.844 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:18.844 "is_configured": true, 00:19:18.844 "data_offset": 256, 00:19:18.844 "data_size": 7936 00:19:18.844 }, 00:19:18.844 { 00:19:18.844 "name": "BaseBdev2", 00:19:18.844 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:18.844 "is_configured": true, 00:19:18.844 "data_offset": 256, 00:19:18.845 "data_size": 7936 00:19:18.845 } 00:19:18.845 ] 00:19:18.845 }' 00:19:18.845 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.845 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.103 09:31:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.103 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.103 "name": "raid_bdev1", 00:19:19.103 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:19.103 "strip_size_kb": 0, 00:19:19.104 "state": "online", 00:19:19.104 "raid_level": "raid1", 00:19:19.104 "superblock": true, 00:19:19.104 "num_base_bdevs": 2, 00:19:19.104 "num_base_bdevs_discovered": 2, 00:19:19.104 "num_base_bdevs_operational": 2, 00:19:19.104 "base_bdevs_list": [ 00:19:19.104 { 00:19:19.104 "name": "spare", 00:19:19.104 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:19.104 "is_configured": true, 00:19:19.104 "data_offset": 256, 00:19:19.104 "data_size": 7936 00:19:19.104 }, 00:19:19.104 { 00:19:19.104 "name": "BaseBdev2", 00:19:19.104 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:19.104 "is_configured": true, 00:19:19.104 "data_offset": 256, 00:19:19.104 "data_size": 7936 00:19:19.104 } 00:19:19.104 ] 00:19:19.104 }' 00:19:19.104 09:31:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.104 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.104 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.363 [2024-11-20 09:31:44.648541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.363 09:31:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.363 "name": "raid_bdev1", 00:19:19.363 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:19.363 "strip_size_kb": 0, 00:19:19.363 "state": "online", 00:19:19.363 
"raid_level": "raid1", 00:19:19.363 "superblock": true, 00:19:19.363 "num_base_bdevs": 2, 00:19:19.363 "num_base_bdevs_discovered": 1, 00:19:19.363 "num_base_bdevs_operational": 1, 00:19:19.363 "base_bdevs_list": [ 00:19:19.363 { 00:19:19.363 "name": null, 00:19:19.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.363 "is_configured": false, 00:19:19.363 "data_offset": 0, 00:19:19.363 "data_size": 7936 00:19:19.363 }, 00:19:19.363 { 00:19:19.363 "name": "BaseBdev2", 00:19:19.363 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:19.363 "is_configured": true, 00:19:19.363 "data_offset": 256, 00:19:19.363 "data_size": 7936 00:19:19.363 } 00:19:19.363 ] 00:19:19.363 }' 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.363 09:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.623 09:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.623 09:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.623 09:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.623 [2024-11-20 09:31:45.075864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.623 [2024-11-20 09:31:45.076079] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.623 [2024-11-20 09:31:45.076099] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.623 [2024-11-20 09:31:45.076140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.883 [2024-11-20 09:31:45.093927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:19.883 09:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.883 09:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:19.883 [2024-11-20 09:31:45.096039] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.823 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:20.823 "name": "raid_bdev1", 00:19:20.823 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:20.823 "strip_size_kb": 0, 00:19:20.823 "state": "online", 00:19:20.823 "raid_level": "raid1", 00:19:20.823 "superblock": true, 00:19:20.823 "num_base_bdevs": 2, 00:19:20.823 "num_base_bdevs_discovered": 2, 00:19:20.823 "num_base_bdevs_operational": 2, 00:19:20.823 "process": { 00:19:20.823 "type": "rebuild", 00:19:20.823 "target": "spare", 00:19:20.823 "progress": { 00:19:20.823 "blocks": 2560, 00:19:20.823 "percent": 32 00:19:20.823 } 00:19:20.823 }, 00:19:20.823 "base_bdevs_list": [ 00:19:20.823 { 00:19:20.824 "name": "spare", 00:19:20.824 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:20.824 "is_configured": true, 00:19:20.824 "data_offset": 256, 00:19:20.824 "data_size": 7936 00:19:20.824 }, 00:19:20.824 { 00:19:20.824 "name": "BaseBdev2", 00:19:20.824 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:20.824 "is_configured": true, 00:19:20.824 "data_offset": 256, 00:19:20.824 "data_size": 7936 00:19:20.824 } 00:19:20.824 ] 00:19:20.824 }' 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.824 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.824 [2024-11-20 09:31:46.259585] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.083 [2024-11-20 09:31:46.301943] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:21.083 [2024-11-20 09:31:46.302048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.083 [2024-11-20 09:31:46.302066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.083 [2024-11-20 09:31:46.302078] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.083 09:31:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.083 "name": "raid_bdev1", 00:19:21.083 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:21.083 "strip_size_kb": 0, 00:19:21.083 "state": "online", 00:19:21.083 "raid_level": "raid1", 00:19:21.083 "superblock": true, 00:19:21.083 "num_base_bdevs": 2, 00:19:21.083 "num_base_bdevs_discovered": 1, 00:19:21.083 "num_base_bdevs_operational": 1, 00:19:21.083 "base_bdevs_list": [ 00:19:21.083 { 00:19:21.083 "name": null, 00:19:21.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.083 "is_configured": false, 00:19:21.083 "data_offset": 0, 00:19:21.083 "data_size": 7936 00:19:21.083 }, 00:19:21.083 { 00:19:21.083 "name": "BaseBdev2", 00:19:21.083 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:21.083 "is_configured": true, 00:19:21.083 "data_offset": 256, 00:19:21.083 "data_size": 7936 00:19:21.083 } 00:19:21.083 ] 00:19:21.083 }' 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.083 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.342 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.342 09:31:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.342 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.342 [2024-11-20 09:31:46.769959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.342 [2024-11-20 09:31:46.770034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.342 [2024-11-20 09:31:46.770060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:21.342 [2024-11-20 09:31:46.770071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.342 [2024-11-20 09:31:46.770276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.342 [2024-11-20 09:31:46.770295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.342 [2024-11-20 09:31:46.770354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.342 [2024-11-20 09:31:46.770368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.342 [2024-11-20 09:31:46.770379] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:21.342 [2024-11-20 09:31:46.770411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.342 [2024-11-20 09:31:46.788696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:21.342 spare 00:19:21.342 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.342 09:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:21.342 [2024-11-20 09:31:46.790728] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:22.722 "name": "raid_bdev1", 00:19:22.722 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:22.722 "strip_size_kb": 0, 00:19:22.722 "state": "online", 00:19:22.722 "raid_level": "raid1", 00:19:22.722 "superblock": true, 00:19:22.722 "num_base_bdevs": 2, 00:19:22.722 "num_base_bdevs_discovered": 2, 00:19:22.722 "num_base_bdevs_operational": 2, 00:19:22.722 "process": { 00:19:22.722 "type": "rebuild", 00:19:22.722 "target": "spare", 00:19:22.722 "progress": { 00:19:22.722 "blocks": 2560, 00:19:22.722 "percent": 32 00:19:22.722 } 00:19:22.722 }, 00:19:22.722 "base_bdevs_list": [ 00:19:22.722 { 00:19:22.722 "name": "spare", 00:19:22.722 "uuid": "ef7eaa82-31d4-53d6-9c18-a8e8dc1cb41d", 00:19:22.722 "is_configured": true, 00:19:22.722 "data_offset": 256, 00:19:22.722 "data_size": 7936 00:19:22.722 }, 00:19:22.722 { 00:19:22.722 "name": "BaseBdev2", 00:19:22.722 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:22.722 "is_configured": true, 00:19:22.722 "data_offset": 256, 00:19:22.722 "data_size": 7936 00:19:22.722 } 00:19:22.722 ] 00:19:22.722 }' 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.722 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.723 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.723 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.723 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.723 09:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.723 [2024-11-20 
09:31:47.930223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.723 [2024-11-20 09:31:47.996389] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.723 [2024-11-20 09:31:47.996459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.723 [2024-11-20 09:31:47.996477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.723 [2024-11-20 09:31:47.996485] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.723 09:31:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.723 "name": "raid_bdev1", 00:19:22.723 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:22.723 "strip_size_kb": 0, 00:19:22.723 "state": "online", 00:19:22.723 "raid_level": "raid1", 00:19:22.723 "superblock": true, 00:19:22.723 "num_base_bdevs": 2, 00:19:22.723 "num_base_bdevs_discovered": 1, 00:19:22.723 "num_base_bdevs_operational": 1, 00:19:22.723 "base_bdevs_list": [ 00:19:22.723 { 00:19:22.723 "name": null, 00:19:22.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.723 "is_configured": false, 00:19:22.723 "data_offset": 0, 00:19:22.723 "data_size": 7936 00:19:22.723 }, 00:19:22.723 { 00:19:22.723 "name": "BaseBdev2", 00:19:22.723 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:22.723 "is_configured": true, 00:19:22.723 "data_offset": 256, 00:19:22.723 "data_size": 7936 00:19:22.723 } 00:19:22.723 ] 00:19:22.723 }' 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.723 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.291 09:31:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.291 "name": "raid_bdev1", 00:19:23.291 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:23.291 "strip_size_kb": 0, 00:19:23.291 "state": "online", 00:19:23.291 "raid_level": "raid1", 00:19:23.291 "superblock": true, 00:19:23.291 "num_base_bdevs": 2, 00:19:23.291 "num_base_bdevs_discovered": 1, 00:19:23.291 "num_base_bdevs_operational": 1, 00:19:23.291 "base_bdevs_list": [ 00:19:23.291 { 00:19:23.291 "name": null, 00:19:23.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.291 "is_configured": false, 00:19:23.291 "data_offset": 0, 00:19:23.291 "data_size": 7936 00:19:23.291 }, 00:19:23.291 { 00:19:23.291 "name": "BaseBdev2", 00:19:23.291 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:23.291 "is_configured": true, 00:19:23.291 "data_offset": 256, 
00:19:23.291 "data_size": 7936 00:19:23.291 } 00:19:23.291 ] 00:19:23.291 }' 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.291 [2024-11-20 09:31:48.630898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:23.291 [2024-11-20 09:31:48.630966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.291 [2024-11-20 09:31:48.630991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:23.291 [2024-11-20 09:31:48.630999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.291 [2024-11-20 09:31:48.631175] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.291 [2024-11-20 09:31:48.631187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.291 [2024-11-20 09:31:48.631242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:23.291 [2024-11-20 09:31:48.631254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.291 [2024-11-20 09:31:48.631264] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.291 [2024-11-20 09:31:48.631274] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:23.291 BaseBdev1 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.291 09:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.227 09:31:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.227 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.486 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.486 "name": "raid_bdev1", 00:19:24.486 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:24.486 "strip_size_kb": 0, 00:19:24.486 "state": "online", 00:19:24.486 "raid_level": "raid1", 00:19:24.486 "superblock": true, 00:19:24.486 "num_base_bdevs": 2, 00:19:24.486 "num_base_bdevs_discovered": 1, 00:19:24.486 "num_base_bdevs_operational": 1, 00:19:24.486 "base_bdevs_list": [ 00:19:24.486 { 00:19:24.486 "name": null, 00:19:24.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.486 "is_configured": false, 00:19:24.486 "data_offset": 0, 00:19:24.486 "data_size": 7936 00:19:24.486 }, 00:19:24.486 { 00:19:24.486 "name": "BaseBdev2", 00:19:24.486 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:24.486 "is_configured": true, 00:19:24.486 "data_offset": 256, 00:19:24.486 "data_size": 7936 00:19:24.486 } 00:19:24.486 ] 00:19:24.486 }' 00:19:24.486 09:31:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.486 09:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.745 "name": "raid_bdev1", 00:19:24.745 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:24.745 "strip_size_kb": 0, 00:19:24.745 "state": "online", 00:19:24.745 "raid_level": "raid1", 00:19:24.745 "superblock": true, 00:19:24.745 "num_base_bdevs": 2, 00:19:24.745 "num_base_bdevs_discovered": 1, 00:19:24.745 "num_base_bdevs_operational": 1, 00:19:24.745 "base_bdevs_list": [ 00:19:24.745 { 00:19:24.745 "name": 
null, 00:19:24.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.745 "is_configured": false, 00:19:24.745 "data_offset": 0, 00:19:24.745 "data_size": 7936 00:19:24.745 }, 00:19:24.745 { 00:19:24.745 "name": "BaseBdev2", 00:19:24.745 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:24.745 "is_configured": true, 00:19:24.745 "data_offset": 256, 00:19:24.745 "data_size": 7936 00:19:24.745 } 00:19:24.745 ] 00:19:24.745 }' 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.745 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.004 [2024-11-20 09:31:50.224269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.004 [2024-11-20 09:31:50.224436] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:25.004 [2024-11-20 09:31:50.224467] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:25.004 request: 00:19:25.004 { 00:19:25.004 "base_bdev": "BaseBdev1", 00:19:25.004 "raid_bdev": "raid_bdev1", 00:19:25.004 "method": "bdev_raid_add_base_bdev", 00:19:25.004 "req_id": 1 00:19:25.004 } 00:19:25.004 Got JSON-RPC error response 00:19:25.004 response: 00:19:25.004 { 00:19:25.004 "code": -22, 00:19:25.004 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:25.004 } 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.004 09:31:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.940 "name": "raid_bdev1", 00:19:25.940 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:25.940 "strip_size_kb": 0, 
00:19:25.940 "state": "online", 00:19:25.940 "raid_level": "raid1", 00:19:25.940 "superblock": true, 00:19:25.940 "num_base_bdevs": 2, 00:19:25.940 "num_base_bdevs_discovered": 1, 00:19:25.940 "num_base_bdevs_operational": 1, 00:19:25.940 "base_bdevs_list": [ 00:19:25.940 { 00:19:25.940 "name": null, 00:19:25.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.940 "is_configured": false, 00:19:25.940 "data_offset": 0, 00:19:25.940 "data_size": 7936 00:19:25.940 }, 00:19:25.940 { 00:19:25.940 "name": "BaseBdev2", 00:19:25.940 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:25.940 "is_configured": true, 00:19:25.940 "data_offset": 256, 00:19:25.940 "data_size": 7936 00:19:25.940 } 00:19:25.940 ] 00:19:25.940 }' 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.940 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.507 09:31:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.507 "name": "raid_bdev1", 00:19:26.507 "uuid": "f9f2f816-afc6-41cd-98fa-d5ad67b3cd88", 00:19:26.507 "strip_size_kb": 0, 00:19:26.507 "state": "online", 00:19:26.507 "raid_level": "raid1", 00:19:26.507 "superblock": true, 00:19:26.507 "num_base_bdevs": 2, 00:19:26.507 "num_base_bdevs_discovered": 1, 00:19:26.507 "num_base_bdevs_operational": 1, 00:19:26.507 "base_bdevs_list": [ 00:19:26.507 { 00:19:26.507 "name": null, 00:19:26.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.507 "is_configured": false, 00:19:26.507 "data_offset": 0, 00:19:26.507 "data_size": 7936 00:19:26.507 }, 00:19:26.507 { 00:19:26.507 "name": "BaseBdev2", 00:19:26.507 "uuid": "a8f14410-80c4-5cef-9de1-1f5f4d0677de", 00:19:26.507 "is_configured": true, 00:19:26.507 "data_offset": 256, 00:19:26.507 "data_size": 7936 00:19:26.507 } 00:19:26.507 ] 00:19:26.507 }' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89500 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89500 ']' 00:19:26.507 09:31:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89500 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89500 00:19:26.507 killing process with pid 89500 00:19:26.507 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.507 00:19:26.507 Latency(us) 00:19:26.507 [2024-11-20T09:31:51.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.507 [2024-11-20T09:31:51.963Z] =================================================================================================================== 00:19:26.507 [2024-11-20T09:31:51.963Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89500' 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89500 00:19:26.507 [2024-11-20 09:31:51.903489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.507 [2024-11-20 09:31:51.903626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.507 [2024-11-20 09:31:51.903675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.507 [2024-11-20 09:31:51.903687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:26.507 09:31:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89500 00:19:26.766 [2024-11-20 09:31:52.211838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.153 09:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:28.153 00:19:28.153 real 0m17.657s 00:19:28.153 user 0m23.097s 00:19:28.153 sys 0m1.741s 00:19:28.153 09:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.153 09:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.153 ************************************ 00:19:28.153 END TEST raid_rebuild_test_sb_md_interleaved 00:19:28.153 ************************************ 00:19:28.153 09:31:53 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:28.153 09:31:53 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:28.153 09:31:53 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89500 ']' 00:19:28.153 09:31:53 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89500 00:19:28.153 09:31:53 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:28.153 00:19:28.153 real 12m37.786s 00:19:28.153 user 17m3.360s 00:19:28.153 sys 1m59.218s 00:19:28.153 09:31:53 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.153 09:31:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.153 ************************************ 00:19:28.153 END TEST bdev_raid 00:19:28.153 ************************************ 00:19:28.153 09:31:53 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.153 09:31:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.153 09:31:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.153 09:31:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.153 
************************************ 00:19:28.153 START TEST spdkcli_raid 00:19:28.153 ************************************ 00:19:28.153 09:31:53 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.411 * Looking for test storage... 00:19:28.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.411 09:31:53 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.411 09:31:53 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.411 09:31:53 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.411 09:31:53 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.412 09:31:53 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.412 --rc genhtml_branch_coverage=1 00:19:28.412 --rc genhtml_function_coverage=1 00:19:28.412 --rc genhtml_legend=1 00:19:28.412 --rc geninfo_all_blocks=1 00:19:28.412 --rc geninfo_unexecuted_blocks=1 00:19:28.412 00:19:28.412 ' 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.412 --rc genhtml_branch_coverage=1 00:19:28.412 --rc genhtml_function_coverage=1 00:19:28.412 --rc genhtml_legend=1 00:19:28.412 --rc geninfo_all_blocks=1 00:19:28.412 --rc geninfo_unexecuted_blocks=1 00:19:28.412 00:19:28.412 ' 00:19:28.412 
09:31:53 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.412 --rc genhtml_branch_coverage=1 00:19:28.412 --rc genhtml_function_coverage=1 00:19:28.412 --rc genhtml_legend=1 00:19:28.412 --rc geninfo_all_blocks=1 00:19:28.412 --rc geninfo_unexecuted_blocks=1 00:19:28.412 00:19:28.412 ' 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.412 --rc genhtml_branch_coverage=1 00:19:28.412 --rc genhtml_function_coverage=1 00:19:28.412 --rc genhtml_legend=1 00:19:28.412 --rc geninfo_all_blocks=1 00:19:28.412 --rc geninfo_unexecuted_blocks=1 00:19:28.412 00:19:28.412 ' 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:28.412 09:31:53 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90175 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:28.412 09:31:53 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90175 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90175 ']' 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.412 09:31:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.412 [2024-11-20 09:31:53.854003] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:19:28.412 [2024-11-20 09:31:53.854194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90175 ] 00:19:28.671 [2024-11-20 09:31:54.027655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.929 [2024-11-20 09:31:54.142502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.929 [2024-11-20 09:31:54.142552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.870 09:31:54 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.870 09:31:54 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:29.870 09:31:54 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:29.870 09:31:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.870 09:31:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.870 09:31:55 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:29.870 09:31:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.870 09:31:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.870 09:31:55 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:29.870 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:29.870 ' 00:19:31.252 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:31.252 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:31.512 09:31:56 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:31.512 09:31:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.512 09:31:56 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.512 09:31:56 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:31.512 09:31:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.512 09:31:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.512 09:31:56 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:31.512 ' 00:19:32.451 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:32.711 09:31:57 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:32.711 09:31:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.711 09:31:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.711 09:31:58 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:32.711 09:31:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.711 09:31:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.711 09:31:58 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:32.711 09:31:58 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:33.278 09:31:58 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:33.278 09:31:58 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:33.278 09:31:58 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:33.278 09:31:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.278 09:31:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.278 09:31:58 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:33.278 09:31:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.278 09:31:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.278 09:31:58 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:33.278 ' 00:19:34.222 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:34.481 09:31:59 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:34.481 09:31:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.481 09:31:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.481 09:31:59 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:34.481 09:31:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.481 09:31:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.481 09:31:59 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:34.481 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:34.481 ' 00:19:35.863 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:35.863 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:35.863 09:32:01 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:35.863 09:32:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.863 09:32:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.123 09:32:01 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90175 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90175 ']' 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90175 00:19:36.123 09:32:01 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90175 00:19:36.123 killing process with pid 90175 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90175' 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90175 00:19:36.123 09:32:01 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90175 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90175 ']' 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90175 00:19:38.662 09:32:03 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90175 ']' 00:19:38.662 Process with pid 90175 is not found 00:19:38.662 09:32:03 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90175 00:19:38.662 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90175) - No such process 00:19:38.662 09:32:03 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90175 is not found' 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:38.662 09:32:03 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:38.662 00:19:38.662 real 0m10.297s 00:19:38.662 user 0m21.342s 00:19:38.662 sys 
0m1.158s 00:19:38.662 09:32:03 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.662 09:32:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.662 ************************************ 00:19:38.662 END TEST spdkcli_raid 00:19:38.662 ************************************ 00:19:38.662 09:32:03 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:38.662 09:32:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.662 09:32:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.662 09:32:03 -- common/autotest_common.sh@10 -- # set +x 00:19:38.662 ************************************ 00:19:38.662 START TEST blockdev_raid5f 00:19:38.662 ************************************ 00:19:38.662 09:32:03 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:38.662 * Looking for test storage... 00:19:38.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:38.662 09:32:03 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:38.662 09:32:03 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:38.662 09:32:03 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:38.662 09:32:04 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.662 09:32:04 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:38.663 09:32:04 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.663 09:32:04 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:38.663 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.663 --rc genhtml_branch_coverage=1 00:19:38.663 --rc genhtml_function_coverage=1 00:19:38.663 --rc genhtml_legend=1 00:19:38.663 --rc geninfo_all_blocks=1 00:19:38.663 --rc geninfo_unexecuted_blocks=1 00:19:38.663 00:19:38.663 ' 00:19:38.663 09:32:04 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.663 --rc genhtml_branch_coverage=1 00:19:38.663 --rc genhtml_function_coverage=1 00:19:38.663 --rc genhtml_legend=1 00:19:38.663 --rc geninfo_all_blocks=1 00:19:38.663 --rc geninfo_unexecuted_blocks=1 00:19:38.663 00:19:38.663 ' 00:19:38.663 09:32:04 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.663 --rc genhtml_branch_coverage=1 00:19:38.663 --rc genhtml_function_coverage=1 00:19:38.663 --rc genhtml_legend=1 00:19:38.663 --rc geninfo_all_blocks=1 00:19:38.663 --rc geninfo_unexecuted_blocks=1 00:19:38.663 00:19:38.663 ' 00:19:38.663 09:32:04 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:38.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.663 --rc genhtml_branch_coverage=1 00:19:38.663 --rc genhtml_function_coverage=1 00:19:38.663 --rc genhtml_legend=1 00:19:38.663 --rc geninfo_all_blocks=1 00:19:38.663 --rc geninfo_unexecuted_blocks=1 00:19:38.663 00:19:38.663 ' 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:38.663 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:38.936 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90451 00:19:38.936 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:38.936 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:38.936 09:32:04 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90451 00:19:38.936 09:32:04 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90451 ']' 00:19:38.936 09:32:04 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.936 09:32:04 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.936 09:32:04 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.936 09:32:04 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.936 09:32:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.936 [2024-11-20 09:32:04.200625] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:19:38.936 [2024-11-20 09:32:04.200798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90451 ] 00:19:38.936 [2024-11-20 09:32:04.374726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.211 [2024-11-20 09:32:04.489222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:40.151 09:32:05 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.151 Malloc0 00:19:40.151 Malloc1 00:19:40.151 Malloc2 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:40.151 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.151 09:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "12c8cccd-f811-4ff6-83ea-7b2cea3443e0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "12c8cccd-f811-4ff6-83ea-7b2cea3443e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "12c8cccd-f811-4ff6-83ea-7b2cea3443e0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2b627650-1c2c-44c3-9e83-a69d882cd00d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "faa55cf2-55b4-4de7-a76d-c0502462ca36",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "61d0584a-32c5-4324-93ed-018182488560",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:40.411 09:32:05 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90451 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90451 ']' 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90451 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90451 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90451' 00:19:40.411 killing process with pid 90451 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90451 00:19:40.411 09:32:05 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90451 00:19:42.976 09:32:08 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:42.976 09:32:08 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:42.976 09:32:08 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:42.976 09:32:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.976 09:32:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.236 ************************************ 00:19:43.236 START TEST bdev_hello_world 00:19:43.236 ************************************ 00:19:43.236 09:32:08 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:43.236 [2024-11-20 09:32:08.529874] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:19:43.236 [2024-11-20 09:32:08.530045] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90524 ] 00:19:43.496 [2024-11-20 09:32:08.712388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.496 [2024-11-20 09:32:08.829749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.063 [2024-11-20 09:32:09.354162] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:44.063 [2024-11-20 09:32:09.354212] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:44.063 [2024-11-20 09:32:09.354230] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:44.063 [2024-11-20 09:32:09.354783] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:44.063 [2024-11-20 09:32:09.354943] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:44.063 [2024-11-20 09:32:09.354961] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:44.063 [2024-11-20 09:32:09.355014] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:44.063 00:19:44.063 [2024-11-20 09:32:09.355034] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:45.444 00:19:45.444 real 0m2.377s 00:19:45.444 user 0m1.997s 00:19:45.444 sys 0m0.259s 00:19:45.444 ************************************ 00:19:45.444 END TEST bdev_hello_world 00:19:45.444 ************************************ 00:19:45.444 09:32:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.444 09:32:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:45.444 09:32:10 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:45.444 09:32:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.444 09:32:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.444 09:32:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.444 ************************************ 00:19:45.444 START TEST bdev_bounds 00:19:45.444 ************************************ 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90566 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90566' 00:19:45.444 Process bdevio pid: 90566 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90566 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90566 ']' 00:19:45.444 09:32:10 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.444 09:32:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:45.702 [2024-11-20 09:32:10.973445] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:19:45.702 [2024-11-20 09:32:10.973581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90566 ] 00:19:45.702 [2024-11-20 09:32:11.150065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:45.961 [2024-11-20 09:32:11.271637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.961 [2024-11-20 09:32:11.271800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.961 [2024-11-20 09:32:11.271851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.531 09:32:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.531 09:32:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:46.531 09:32:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:46.531 I/O targets: 00:19:46.531 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:46.531 00:19:46.531 
00:19:46.531 CUnit - A unit testing framework for C - Version 2.1-3 00:19:46.531 http://cunit.sourceforge.net/ 00:19:46.531 00:19:46.531 00:19:46.531 Suite: bdevio tests on: raid5f 00:19:46.531 Test: blockdev write read block ...passed 00:19:46.531 Test: blockdev write zeroes read block ...passed 00:19:46.531 Test: blockdev write zeroes read no split ...passed 00:19:46.798 Test: blockdev write zeroes read split ...passed 00:19:46.798 Test: blockdev write zeroes read split partial ...passed 00:19:46.798 Test: blockdev reset ...passed 00:19:46.798 Test: blockdev write read 8 blocks ...passed 00:19:46.798 Test: blockdev write read size > 128k ...passed 00:19:46.798 Test: blockdev write read invalid size ...passed 00:19:46.798 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:46.798 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:46.798 Test: blockdev write read max offset ...passed 00:19:46.798 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:46.798 Test: blockdev writev readv 8 blocks ...passed 00:19:46.798 Test: blockdev writev readv 30 x 1block ...passed 00:19:46.798 Test: blockdev writev readv block ...passed 00:19:46.798 Test: blockdev writev readv size > 128k ...passed 00:19:46.798 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:46.798 Test: blockdev comparev and writev ...passed 00:19:46.798 Test: blockdev nvme passthru rw ...passed 00:19:46.798 Test: blockdev nvme passthru vendor specific ...passed 00:19:46.798 Test: blockdev nvme admin passthru ...passed 00:19:46.798 Test: blockdev copy ...passed 00:19:46.798 00:19:46.798 Run Summary: Type Total Ran Passed Failed Inactive 00:19:46.798 suites 1 1 n/a 0 0 00:19:46.798 tests 23 23 23 0 0 00:19:46.798 asserts 130 130 130 0 n/a 00:19:46.798 00:19:46.798 Elapsed time = 0.679 seconds 00:19:46.798 0 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90566 00:19:47.058 
09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90566 ']' 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90566 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90566 00:19:47.058 killing process with pid 90566 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90566' 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90566 00:19:47.058 09:32:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90566 00:19:48.439 09:32:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:48.439 00:19:48.439 real 0m2.858s 00:19:48.439 user 0m7.125s 00:19:48.439 sys 0m0.371s 00:19:48.439 09:32:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.439 09:32:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:48.439 ************************************ 00:19:48.439 END TEST bdev_bounds 00:19:48.439 ************************************ 00:19:48.439 09:32:13 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:48.439 09:32:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:48.439 09:32:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.439 
09:32:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:48.439 ************************************ 00:19:48.439 START TEST bdev_nbd 00:19:48.439 ************************************ 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90631 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:48.439 09:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90631 /var/tmp/spdk-nbd.sock 00:19:48.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:48.440 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90631 ']' 00:19:48.440 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:48.440 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.440 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:48.440 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.440 09:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:48.699 [2024-11-20 09:32:13.916251] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:19:48.699 [2024-11-20 09:32:13.916405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.699 [2024-11-20 09:32:14.084405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.958 [2024-11-20 09:32:14.206743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:49.527 09:32:14 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:49.790 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:49.790 09:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.790 1+0 records in 00:19:49.790 1+0 records out 00:19:49.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421903 s, 9.7 MB/s 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:49.790 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:50.050 { 00:19:50.050 "nbd_device": "/dev/nbd0", 00:19:50.050 "bdev_name": "raid5f" 00:19:50.050 } 00:19:50.050 ]' 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:50.050 { 00:19:50.050 "nbd_device": "/dev/nbd0", 00:19:50.050 "bdev_name": "raid5f" 00:19:50.050 } 00:19:50.050 ]' 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.050 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.309 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:50.568 09:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:50.828 /dev/nbd0 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:50.828 09:32:16 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.828 1+0 records in 00:19:50.828 1+0 records out 00:19:50.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411973 s, 9.9 MB/s 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.828 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:51.088 { 00:19:51.088 "nbd_device": "/dev/nbd0", 00:19:51.088 "bdev_name": "raid5f" 00:19:51.088 } 00:19:51.088 ]' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:51.088 { 00:19:51.088 "nbd_device": "/dev/nbd0", 00:19:51.088 "bdev_name": "raid5f" 00:19:51.088 } 00:19:51.088 ]' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:51.088 256+0 records in 00:19:51.088 256+0 records out 00:19:51.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124632 s, 84.1 MB/s 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:51.088 256+0 records in 00:19:51.088 256+0 records out 00:19:51.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033497 s, 31.3 MB/s 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:51.088 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.348 09:32:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:51.607 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:51.607 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:51.607 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:51.867 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:51.867 malloc_lvol_verify 00:19:52.125 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:52.125 b5a8f7bf-4509-4c6f-be2b-4297880c6609 00:19:52.125 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:52.384 d10ffed4-853a-42f7-8196-b6931eed8d9b 00:19:52.384 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:52.644 /dev/nbd0 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:52.644 mke2fs 1.47.0 (5-Feb-2023) 00:19:52.644 Discarding device blocks: 0/4096 done 00:19:52.644 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:52.644 00:19:52.644 Allocating group tables: 0/1 done 00:19:52.644 Writing inode tables: 0/1 done 00:19:52.644 Creating journal (1024 blocks): done 00:19:52.644 Writing superblocks and filesystem accounting information: 0/1 done 00:19:52.644 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.644 09:32:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90631 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90631 ']' 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90631 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90631 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.905 killing process with pid 90631 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90631' 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90631 00:19:52.905 09:32:18 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90631 00:19:54.304 09:32:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:54.304 ************************************ 00:19:54.304 END TEST bdev_nbd 00:19:54.304 ************************************ 00:19:54.304 00:19:54.304 real 0m5.923s 00:19:54.304 user 0m8.107s 00:19:54.304 sys 0m1.350s 00:19:54.304 09:32:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.304 09:32:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:54.565 09:32:19 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:54.565 09:32:19 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:54.565 09:32:19 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:54.565 09:32:19 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:54.565 09:32:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.565 09:32:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.565 09:32:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:54.565 ************************************ 00:19:54.565 START TEST bdev_fio 00:19:54.565 ************************************ 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:54.565 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:54.565 ************************************ 00:19:54.565 START TEST bdev_fio_rw_verify 00:19:54.565 ************************************ 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:54.565 09:32:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:54.825 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:54.825 fio-3.35 00:19:54.825 Starting 1 thread 00:20:07.056 00:20:07.056 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90845: Wed Nov 20 09:32:31 2024 00:20:07.056 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(417MiB/10001msec) 00:20:07.056 slat (nsec): min=18067, max=74022, avg=22380.01, stdev=3209.11 00:20:07.056 clat (usec): min=10, max=480, avg=149.72, stdev=55.42 00:20:07.056 lat (usec): min=30, max=521, avg=172.10, stdev=56.29 00:20:07.056 clat percentiles (usec): 00:20:07.056 | 50.000th=[ 147], 99.000th=[ 273], 99.900th=[ 306], 99.990th=[ 363], 00:20:07.056 | 99.999th=[ 433] 00:20:07.056 write: IOPS=11.2k, BW=43.9MiB/s (46.0MB/s)(433MiB/9875msec); 0 zone resets 00:20:07.056 slat (usec): min=7, max=1725, avg=18.96, stdev= 6.74 00:20:07.056 clat (usec): min=60, max=2254, avg=341.47, stdev=57.72 00:20:07.056 lat (usec): min=76, max=2277, avg=360.43, stdev=59.77 00:20:07.056 clat percentiles (usec): 00:20:07.056 | 50.000th=[ 338], 99.000th=[ 478], 99.900th=[ 644], 99.990th=[ 971], 00:20:07.056 | 99.999th=[ 2245] 00:20:07.056 bw ( KiB/s): min=38840, max=49464, per=99.01%, avg=44466.53, stdev=2629.63, samples=19 00:20:07.056 iops : min= 9710, max=12366, avg=11116.63, stdev=657.41, samples=19 00:20:07.056 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.24%, 250=38.31%, 500=50.19% 00:20:07.056 lat (usec) : 750=0.23%, 1000=0.02% 00:20:07.056 lat (msec) : 2=0.01%, 4=0.01% 00:20:07.056 cpu : usr=98.89%, sys=0.46%, ctx=67, majf=0, minf=8950 00:20:07.056 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.056 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.056 issued rwts: total=106676,110875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.056 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:07.056 00:20:07.056 Run status group 0 (all jobs): 00:20:07.056 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=417MiB (437MB), run=10001-10001msec 00:20:07.056 WRITE: bw=43.9MiB/s (46.0MB/s), 43.9MiB/s-43.9MiB/s (46.0MB/s-46.0MB/s), io=433MiB (454MB), run=9875-9875msec 00:20:07.623 ----------------------------------------------------- 00:20:07.623 Suppressions used: 00:20:07.623 count bytes template 00:20:07.623 1 7 /usr/src/fio/parse.c 00:20:07.623 872 83712 /usr/src/fio/iolog.c 00:20:07.623 1 8 libtcmalloc_minimal.so 00:20:07.623 1 904 libcrypto.so 00:20:07.623 ----------------------------------------------------- 00:20:07.623 00:20:07.623 00:20:07.623 real 0m12.956s 00:20:07.623 user 0m12.962s 00:20:07.623 sys 0m0.615s 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:07.623 ************************************ 00:20:07.623 END TEST bdev_fio_rw_verify 00:20:07.623 ************************************ 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.623 09:32:32 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "12c8cccd-f811-4ff6-83ea-7b2cea3443e0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "12c8cccd-f811-4ff6-83ea-7b2cea3443e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "12c8cccd-f811-4ff6-83ea-7b2cea3443e0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2b627650-1c2c-44c3-9e83-a69d882cd00d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "faa55cf2-55b4-4de7-a76d-c0502462ca36",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "61d0584a-32c5-4324-93ed-018182488560",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:07.623 09:32:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.623 /home/vagrant/spdk_repo/spdk 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:20:07.623 00:20:07.623 real 0m13.236s 00:20:07.623 user 0m13.080s 00:20:07.623 sys 0m0.747s 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.623 09:32:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:07.623 ************************************ 00:20:07.623 END TEST bdev_fio 00:20:07.623 ************************************ 00:20:07.882 09:32:33 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:07.882 09:32:33 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:07.882 09:32:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:07.882 09:32:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.882 09:32:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:07.882 ************************************ 00:20:07.882 START TEST bdev_verify 00:20:07.882 ************************************ 00:20:07.882 09:32:33 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:07.882 [2024-11-20 09:32:33.193603] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 
00:20:07.882 [2024-11-20 09:32:33.193775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91009 ] 00:20:08.140 [2024-11-20 09:32:33.366798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:08.140 [2024-11-20 09:32:33.488830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.140 [2024-11-20 09:32:33.489652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.708 Running I/O for 5 seconds... 00:20:11.049 9254.00 IOPS, 36.15 MiB/s [2024-11-20T09:32:37.072Z] 9546.50 IOPS, 37.29 MiB/s [2024-11-20T09:32:38.447Z] 9716.67 IOPS, 37.96 MiB/s [2024-11-20T09:32:39.387Z] 9773.50 IOPS, 38.18 MiB/s [2024-11-20T09:32:39.387Z] 9827.00 IOPS, 38.39 MiB/s 00:20:13.931 Latency(us) 00:20:13.931 [2024-11-20T09:32:39.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.932 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:13.932 Verification LBA range: start 0x0 length 0x2000 00:20:13.932 raid5f : 5.02 4165.53 16.27 0.00 0.00 46281.09 1094.65 33884.12 00:20:13.932 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:13.932 Verification LBA range: start 0x2000 length 0x2000 00:20:13.932 raid5f : 5.02 5624.97 21.97 0.00 0.00 34321.55 275.45 34799.90 00:20:13.932 [2024-11-20T09:32:39.388Z] =================================================================================================================== 00:20:13.932 [2024-11-20T09:32:39.388Z] Total : 9790.50 38.24 0.00 0.00 39408.26 275.45 34799.90 00:20:15.324 00:20:15.324 real 0m7.394s 00:20:15.324 user 0m13.637s 00:20:15.324 sys 0m0.287s 00:20:15.324 09:32:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.324 09:32:40 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:15.324 ************************************ 00:20:15.324 END TEST bdev_verify 00:20:15.324 ************************************ 00:20:15.324 09:32:40 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:15.324 09:32:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:15.324 09:32:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.324 09:32:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:15.324 ************************************ 00:20:15.324 START TEST bdev_verify_big_io 00:20:15.324 ************************************ 00:20:15.324 09:32:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:15.324 [2024-11-20 09:32:40.651535] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:20:15.324 [2024-11-20 09:32:40.651649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91109 ] 00:20:15.582 [2024-11-20 09:32:40.815569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:15.582 [2024-11-20 09:32:40.933259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.582 [2024-11-20 09:32:40.933319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.150 Running I/O for 5 seconds... 
00:20:18.464 506.00 IOPS, 31.62 MiB/s [2024-11-20T09:32:44.867Z] 695.50 IOPS, 43.47 MiB/s [2024-11-20T09:32:45.804Z] 655.00 IOPS, 40.94 MiB/s [2024-11-20T09:32:46.740Z] 681.75 IOPS, 42.61 MiB/s [2024-11-20T09:32:47.010Z] 660.00 IOPS, 41.25 MiB/s 00:20:21.554 Latency(us) 00:20:21.554 [2024-11-20T09:32:47.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.554 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:21.554 Verification LBA range: start 0x0 length 0x200 00:20:21.554 raid5f : 5.29 312.31 19.52 0.00 0.00 10180315.58 215.53 443240.86 00:20:21.554 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:21.554 Verification LBA range: start 0x200 length 0x200 00:20:21.554 raid5f : 5.26 362.35 22.65 0.00 0.00 8820849.48 321.96 390125.22 00:20:21.554 [2024-11-20T09:32:47.011Z] =================================================================================================================== 00:20:21.555 [2024-11-20T09:32:47.011Z] Total : 674.66 42.17 0.00 0.00 9452030.17 215.53 443240.86 00:20:22.937 00:20:22.937 real 0m7.608s 00:20:22.937 user 0m14.125s 00:20:22.937 sys 0m0.269s 00:20:22.937 09:32:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.937 09:32:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:22.937 ************************************ 00:20:22.937 END TEST bdev_verify_big_io 00:20:22.937 ************************************ 00:20:22.937 09:32:48 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:22.937 09:32:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:22.937 09:32:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.937 09:32:48 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.937 ************************************ 00:20:22.937 START TEST bdev_write_zeroes 00:20:22.937 ************************************ 00:20:22.937 09:32:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:22.937 [2024-11-20 09:32:48.344068] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization... 00:20:22.937 [2024-11-20 09:32:48.344216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91202 ] 00:20:23.203 [2024-11-20 09:32:48.526662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.204 [2024-11-20 09:32:48.639843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.772 Running I/O for 1 seconds... 
00:20:25.148 25407.00 IOPS, 99.25 MiB/s
00:20:25.148 Latency(us)
00:20:25.148 [2024-11-20T09:32:50.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:25.148 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:25.148 raid5f : 1.01 25382.12 99.15 0.00 0.00 5027.18 1352.22 7097.35
00:20:25.148 [2024-11-20T09:32:50.604Z] ===================================================================================================================
00:20:25.148 [2024-11-20T09:32:50.604Z] Total : 25382.12 99.15 0.00 0.00 5027.18 1352.22 7097.35
00:20:26.527
00:20:26.527 real 0m3.544s
00:20:26.527 user 0m3.145s
00:20:26.527 sys 0m0.272s
00:20:26.527 09:32:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:26.527 09:32:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:20:26.527 ************************************
00:20:26.527 END TEST bdev_write_zeroes
00:20:26.527 ************************************
00:20:26.527 09:32:51 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:26.527 09:32:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:26.527 09:32:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:26.527 09:32:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:26.527 ************************************
00:20:26.527 START TEST bdev_json_nonenclosed
00:20:26.527 ************************************
00:20:26.527 09:32:51 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:26.527 [2024-11-20 09:32:51.950739] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:20:26.527 [2024-11-20 09:32:51.950865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91271 ]
00:20:26.786 [2024-11-20 09:32:52.112460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:27.045 [2024-11-20 09:32:52.249557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:27.045 [2024-11-20 09:32:52.249662] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:20:27.045 [2024-11-20 09:32:52.249692] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:27.045 [2024-11-20 09:32:52.249702] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:27.304
00:20:27.304 real 0m0.692s
00:20:27.304 user 0m0.453s
00:20:27.304 sys 0m0.134s
00:20:27.304 09:32:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:27.304 09:32:52 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:20:27.304 ************************************
00:20:27.304 END TEST bdev_json_nonenclosed
00:20:27.304 ************************************
00:20:27.304 09:32:52 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:27.304 09:32:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:27.304 09:32:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:27.304 09:32:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:27.304 ************************************
00:20:27.304 START TEST bdev_json_nonarray
00:20:27.304 ************************************
00:20:27.304 09:32:52 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:27.304 [2024-11-20 09:32:52.705276] Starting SPDK v25.01-pre git sha1 2741dd1ac / DPDK 24.03.0 initialization...
00:20:27.304 [2024-11-20 09:32:52.705409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91302 ]
00:20:27.564 [2024-11-20 09:32:52.882434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:27.564 [2024-11-20 09:32:53.015890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:27.564 [2024-11-20 09:32:53.016031] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:20:27.564 [2024-11-20 09:32:53.016059] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:27.564 [2024-11-20 09:32:53.016087] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:28.133
00:20:28.133 real 0m0.721s
00:20:28.133 user 0m0.481s
00:20:28.133 sys 0m0.134s
00:20:28.133 09:32:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:28.133 09:32:53 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:20:28.133 ************************************
00:20:28.133 END TEST bdev_json_nonarray
00:20:28.133 ************************************
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:20:28.133 09:32:53 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:20:28.133
00:20:28.133 real 0m49.538s
00:20:28.133 user 1m6.807s
00:20:28.133 sys 0m4.922s
00:20:28.133 09:32:53 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:28.133 09:32:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:28.133 ************************************
00:20:28.133 END TEST blockdev_raid5f
00:20:28.133 ************************************
00:20:28.133 09:32:53 -- spdk/autotest.sh@194 -- # uname -s
00:20:28.133 09:32:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:28.133 09:32:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:28.133 09:32:53 -- common/autotest_common.sh@10 -- # set +x
00:20:28.133 09:32:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:28.133 09:32:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:28.133 09:32:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:28.133 09:32:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:28.133 09:32:53 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:28.133 09:32:53 -- common/autotest_common.sh@10 -- # set +x
00:20:28.133 09:32:53 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:28.133 09:32:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:28.133 09:32:53 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:28.133 09:32:53 -- common/autotest_common.sh@10 -- # set +x
00:20:30.671 INFO: APP EXITING
00:20:30.671 INFO: killing all VMs
00:20:30.671 INFO: killing vhost app
00:20:30.671 INFO: EXIT DONE
00:20:30.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:30.931 Waiting for block devices as requested
00:20:30.931 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:31.191 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:31.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:32.020 Cleaning
00:20:32.020 Removing: /var/run/dpdk/spdk0/config
00:20:32.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:32.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:32.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:32.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:32.020 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:32.020 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:32.020 Removing: /dev/shm/spdk_tgt_trace.pid56996
00:20:32.020 Removing: /var/run/dpdk/spdk0
00:20:32.020 Removing: /var/run/dpdk/spdk_pid56760
00:20:32.020 Removing: /var/run/dpdk/spdk_pid56996
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57231
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57335
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57391
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57519
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57543
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57753
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57864
00:20:32.020 Removing: /var/run/dpdk/spdk_pid57971
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58093
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58208
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58247
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58284
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58360
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58455
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58913
00:20:32.020 Removing: /var/run/dpdk/spdk_pid58996
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59081
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59097
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59261
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59288
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59443
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59464
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59539
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59563
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59631
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59656
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59868
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59904
00:20:32.020 Removing: /var/run/dpdk/spdk_pid59993
00:20:32.020 Removing: /var/run/dpdk/spdk_pid61375
00:20:32.020 Removing: /var/run/dpdk/spdk_pid61592
00:20:32.020 Removing: /var/run/dpdk/spdk_pid61738
00:20:32.020 Removing: /var/run/dpdk/spdk_pid62392
00:20:32.020 Removing: /var/run/dpdk/spdk_pid62609
00:20:32.020 Removing: /var/run/dpdk/spdk_pid62755
00:20:32.280 Removing: /var/run/dpdk/spdk_pid63419
00:20:32.280 Removing: /var/run/dpdk/spdk_pid63750
00:20:32.280 Removing: /var/run/dpdk/spdk_pid63897
00:20:32.280 Removing: /var/run/dpdk/spdk_pid65316
00:20:32.280 Removing: /var/run/dpdk/spdk_pid65575
00:20:32.280 Removing: /var/run/dpdk/spdk_pid65726
00:20:32.280 Removing: /var/run/dpdk/spdk_pid67145
00:20:32.280 Removing: /var/run/dpdk/spdk_pid67405
00:20:32.280 Removing: /var/run/dpdk/spdk_pid67556
00:20:32.280 Removing: /var/run/dpdk/spdk_pid68958
00:20:32.280 Removing: /var/run/dpdk/spdk_pid69404
00:20:32.280 Removing: /var/run/dpdk/spdk_pid69555
00:20:32.280 Removing: /var/run/dpdk/spdk_pid71061
00:20:32.280 Removing: /var/run/dpdk/spdk_pid71327
00:20:32.280 Removing: /var/run/dpdk/spdk_pid71478
00:20:32.280 Removing: /var/run/dpdk/spdk_pid72981
00:20:32.280 Removing: /var/run/dpdk/spdk_pid73251
00:20:32.280 Removing: /var/run/dpdk/spdk_pid73398
00:20:32.280 Removing: /var/run/dpdk/spdk_pid74909
00:20:32.280 Removing: /var/run/dpdk/spdk_pid75407
00:20:32.280 Removing: /var/run/dpdk/spdk_pid75553
00:20:32.280 Removing: /var/run/dpdk/spdk_pid75702
00:20:32.280 Removing: /var/run/dpdk/spdk_pid76120
00:20:32.280 Removing: /var/run/dpdk/spdk_pid76860
00:20:32.280 Removing: /var/run/dpdk/spdk_pid77237
00:20:32.280 Removing: /var/run/dpdk/spdk_pid77926
00:20:32.280 Removing: /var/run/dpdk/spdk_pid78378
00:20:32.280 Removing: /var/run/dpdk/spdk_pid79137
00:20:32.280 Removing: /var/run/dpdk/spdk_pid79557
00:20:32.280 Removing: /var/run/dpdk/spdk_pid81533
00:20:32.280 Removing: /var/run/dpdk/spdk_pid81977
00:20:32.280 Removing: /var/run/dpdk/spdk_pid82431
00:20:32.280 Removing: /var/run/dpdk/spdk_pid84538
00:20:32.280 Removing: /var/run/dpdk/spdk_pid85029
00:20:32.280 Removing: /var/run/dpdk/spdk_pid85553
00:20:32.280 Removing: /var/run/dpdk/spdk_pid86617
00:20:32.280 Removing: /var/run/dpdk/spdk_pid86948
00:20:32.280 Removing: /var/run/dpdk/spdk_pid87903
00:20:32.280 Removing: /var/run/dpdk/spdk_pid88231
00:20:32.280 Removing: /var/run/dpdk/spdk_pid89171
00:20:32.280 Removing: /var/run/dpdk/spdk_pid89500
00:20:32.280 Removing: /var/run/dpdk/spdk_pid90175
00:20:32.280 Removing: /var/run/dpdk/spdk_pid90451
00:20:32.280 Removing: /var/run/dpdk/spdk_pid90524
00:20:32.280 Removing: /var/run/dpdk/spdk_pid90566
00:20:32.280 Removing: /var/run/dpdk/spdk_pid90830
00:20:32.280 Removing: /var/run/dpdk/spdk_pid91009
00:20:32.280 Removing: /var/run/dpdk/spdk_pid91109
00:20:32.280 Removing: /var/run/dpdk/spdk_pid91202
00:20:32.280 Removing: /var/run/dpdk/spdk_pid91271
00:20:32.280 Removing: /var/run/dpdk/spdk_pid91302
00:20:32.280 Clean
00:20:32.538 09:32:57 -- common/autotest_common.sh@1453 -- # return 0
00:20:32.539 09:32:57 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:20:32.539 09:32:57 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:32.539 09:32:57 -- common/autotest_common.sh@10 -- # set +x
00:20:32.539 09:32:57 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:20:32.539 09:32:57 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:32.539 09:32:57 -- common/autotest_common.sh@10 -- # set +x
00:20:32.539 09:32:57 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:32.539 09:32:57 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:32.539 09:32:57 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:32.539 09:32:57 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:20:32.539 09:32:57 -- spdk/autotest.sh@398 -- # hostname
00:20:32.539 09:32:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:32.798 geninfo: WARNING: invalid characters removed from testname!
00:20:59.344 09:33:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:59.344 09:33:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:00.718 09:33:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:03.255 09:33:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:05.158 09:33:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:07.058 09:33:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:09.586 09:33:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:09.586 09:33:34 -- spdk/autorun.sh@1 -- $ timing_finish
00:21:09.586 09:33:34 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:21:09.586 09:33:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:09.586 09:33:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:21:09.586 09:33:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:09.586 + [[ -n 5425 ]]
00:21:09.586 + sudo kill 5425
00:21:09.594 [Pipeline] }
00:21:09.610 [Pipeline] // timeout
00:21:09.615 [Pipeline] }
00:21:09.629 [Pipeline] // stage
00:21:09.634 [Pipeline] }
00:21:09.649 [Pipeline] // catchError
00:21:09.658 [Pipeline] stage
00:21:09.660 [Pipeline] { (Stop VM)
00:21:09.673 [Pipeline] sh
00:21:09.952 + vagrant halt
00:21:12.484 ==> default: Halting domain...
00:21:20.615 [Pipeline] sh
00:21:20.896 + vagrant destroy -f
00:21:23.432 ==> default: Removing domain...
00:21:23.701 [Pipeline] sh
00:21:23.984 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output
00:21:23.991 [Pipeline] }
00:21:24.002 [Pipeline] // stage
00:21:24.005 [Pipeline] }
00:21:24.017 [Pipeline] // dir
00:21:24.020 [Pipeline] }
00:21:24.031 [Pipeline] // wrap
00:21:24.035 [Pipeline] }
00:21:24.044 [Pipeline] // catchError
00:21:24.051 [Pipeline] stage
00:21:24.052 [Pipeline] { (Epilogue)
00:21:24.063 [Pipeline] sh
00:21:24.339 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:30.920 [Pipeline] catchError
00:21:30.921 [Pipeline] {
00:21:30.934 [Pipeline] sh
00:21:31.219 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:31.219 Artifacts sizes are good
00:21:31.230 [Pipeline] }
00:21:31.245 [Pipeline] // catchError
00:21:31.254 [Pipeline] archiveArtifacts
00:21:31.260 Archiving artifacts
00:21:31.362 [Pipeline] cleanWs
00:21:31.373 [WS-CLEANUP] Deleting project workspace...
00:21:31.373 [WS-CLEANUP] Deferred wipeout is used...
00:21:31.380 [WS-CLEANUP] done
00:21:31.381 [Pipeline] }
00:21:31.395 [Pipeline] // stage
00:21:31.400 [Pipeline] }
00:21:31.414 [Pipeline] // node
00:21:31.419 [Pipeline] End of Pipeline
00:21:31.452 Finished: SUCCESS